Describe the bug
We tried to call getItem using DynamoDbEnhancedClient but got a Crc32MismatchException.
Expected Behavior
getItem should succeed and return the item.
Current Behavior
We get the following error:
software.amazon.awssdk.core.exception.Crc32MismatchException: Expected 1657156166 as the Crc32 checksum but the actual calculated checksum was 3693931191
at software.amazon.awssdk.core.exception.Crc32MismatchException$BuilderImpl.build(Crc32MismatchException.java:88)
at software.amazon.awssdk.core.internal.util.Crc32ChecksumValidatingInputStream.validateChecksum(Crc32ChecksumValidatingInputStream.java:62)
at software.amazon.awssdk.core.internal.util.Crc32ChecksumValidatingInputStream.close(Crc32ChecksumValidatingInputStream.java:50)
at java.base/java.io.FilterInputStream.close(Unknown Source)
at software.amazon.awssdk.utils.FunctionalUtils.lambda$safeRunnable$5(FunctionalUtils.java:124)
at software.amazon.awssdk.utils.FunctionalUtils.invokeSafely(FunctionalUtils.java:140)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonResponseHandler.lambda$handle$4(JsonResponseHandler.java:94)
at java.base/java.util.Optional.ifPresent(Unknown Source)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonResponseHandler.handle(JsonResponseHandler.java:94)
at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonResponseHandler.handle(JsonResponseHandler.java:36)
at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonResponseHandler.handle(AwsJsonResponseHandler.java:44)
at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.lambda$handle$0(MetricCollectingHttpResponseHandler.java:52)
at software.amazon.awssdk.core.internal.util.MetricUtils.measureDurationUnsafe(MetricUtils.java:64)
at software.amazon.awssdk.core.http.MetricCollectingHttpResponseHandler.handle(MetricCollectingHttpResponseHandler.java:52)
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler$Crc32ValidationResponseHandler.handle(AwsSyncClientHandler.java:94)
at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$resultTransformationResponseHandler$7(BaseClientHandler.java:287)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleSuccessResponse(CombinedResponseHandler.java:97)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handleResponse(CombinedResponseHandler.java:72)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:59)
at software.amazon.awssdk.core.internal.http.CombinedResponseHandler.handle(CombinedResponseHandler.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:30)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:73)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:78)
at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:40)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:50)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:80)
at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:48)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:31)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:193)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:135)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:161)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:114)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:169)
at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:95)
at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:55)
at software.amazon.awssdk.services.dynamodb.DefaultDynamoDbClient.getItem(DefaultDynamoDbClient.java:3107)
at software.amazon.awssdk.enhanced.dynamodb.internal.operations.CommonOperation.execute(CommonOperation.java:114)
at software.amazon.awssdk.enhanced.dynamodb.internal.operations.TableOperation.executeOnPrimaryIndex(TableOperation.java:59)
at software.amazon.awssdk.enhanced.dynamodb.internal.client.DefaultDynamoDbTable.getItem(DefaultDynamoDbTable.java:139)
at software.amazon.awssdk.enhanced.dynamodb.internal.client.DefaultDynamoDbTable.getItem(DefaultDynamoDbTable.java:146)
at software.amazon.awssdk.enhanced.dynamodb.internal.client.DefaultDynamoDbTable.getItem(DefaultDynamoDbTable.java:151)
Reproduction Steps
// https://www.http4k.org/api/org.http4k.client/-ok-http/
val httpClient: HttpHandler = OkHttp()
val awsHttpClient = AwsSdkClient(httpClient)
val dynamoDbClient: DynamoDbClient =
DynamoDbClient.builder()
.region(Region.of(environment.getAwsRegion()))
.httpClient(awsHttpClient)
.build()
val enhancedClient: DynamoDbEnhancedClient =
DynamoDbEnhancedClient.builder()
.dynamoDbClient(dynamoDbClient)
.build()
val tableName = ...
val schema = ...
val table = enhancedClient.table(tableName, schema)
val key = ...
table.getItem(key)
Possible Solution
This happened in aws-sdk-java (v1) as well; maybe the fix was missed in v2? See aws/aws-sdk-java#1018.
AWS Java SDK version used
2.17.27
JDK version used
java 11
Operating System and version
Amazon Linux 2
Reported to github/aws/aws-sdk-java-v2
Related (but not the same) Stack Overflow question:
If you are using OkHttp as the client, you can add a custom header configuration: Accept-Encoding: gzip. Setting this header explicitly disables OkHttp's transparent gzip handling, which would otherwise decompress the body and strip the Content-Encoding header before the SDK can validate the CRC32 checksum.
You can do something like this in Kotlin:
val httpClient: HttpHandler = OkHttp()
val awsHttpClient = AwsSdkClient(httpClient)
val dynamoDbClient: DynamoDbClient =
DynamoDbClient.builder()
.region(Region.of(environment.getAwsRegion()))
.httpClient(awsHttpClient)
.overrideConfiguration { o -> o.putHeader("Accept-Encoding", "gzip") }
.build()
val enhancedClient: DynamoDbEnhancedClient =
DynamoDbEnhancedClient.builder()
.dynamoDbClient(dynamoDbClient)
.build()
val tableName = ...
val schema = ...
val table = enhancedClient.table(tableName, schema)
val key = ...
table.getItem(key)
I have not seen others use OkHttp for their AWS clients before, and I believe this HTTP client uses gzip compression by default. To change that configuration you would need to write a network interceptor, as described here.
I suggest first building with the default HTTP client, which only requires removing two lines of code. That will tell you whether OkHttp is in fact the issue.
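In case it helps, here is a minimal sketch (in Scala) of the kind of network interceptor mentioned above. It is an illustration only, not the SDK's own API; the interceptor name and client wiring are my own:
import okhttp3.{Interceptor, OkHttpClient, Response}

// Sketch only: pinning Accept-Encoding ourselves disables OkHttp's
// transparent gzip decompression, so the response body reaches the
// AWS SDK (and its CRC32 validation) exactly as the service sent it.
val gzipPassthrough = new Interceptor {
  override def intercept(chain: Interceptor.Chain): Response = {
    val request = chain.request().newBuilder()
      .header("Accept-Encoding", "gzip")
      .build()
    chain.proceed(request)
  }
}

val okHttpClient = new OkHttpClient.Builder()
  .addNetworkInterceptor(gzipPassthrough)
  .build()
As far as I know, http4k's OkHttp() factory accepts a custom OkHttpClient instance, so a client built this way could then be wrapped by AwsSdkClient as in the snippets above.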
Related
My code:
function MsgReceivedInPastHour(channelId, connectorID, status) {
    var client = new com.mirth.connect.client.core.Client('https://127.0.0.1:443/');
    try {
        var loginStatus = client.login('userID', 'psw');
    } catch (ex) {
        client.close();
        throw 'Unable to log on to the server, Error: ' + ex.message;
    }
    var filter = new com.mirth.connect.model.filters.MessageFilter;
    var calendar = java.util.Calendar;
    var startDate = calendar.getInstance();
    var endDate = calendar.getInstance();
    // ... logic to set start/end date ...
    filter.setStartDate(startDate);
    filter.setEndDate(endDate);
    var statuses = new java.util.HashSet();
    var Status = com.mirth.connect.donkey.model.message.Status;
    var list = Lists.list().append(connectorID);
    var metricStatus = Status.SENT;
    statuses.add(metricStatus);
    filter.setStatuses(statuses);
    filter.setIncludedMetaDataIds(list);
    var nCount = client.getMessageCount(channelId, filter);
    client.close();
    return nCount;
}
Reference:
Mirth getMessageCount using JavaScript not working
Mostly it works fine, but it randomly throws an exception at line number 218, which is:
var client = new com.mirth.connect.client.core.Client('https://127.0.0.1:443/')
Does anyone have experience with this error, or a solution to get rid of it?
[2021-06-30 02:00:02,000] ERROR (com.mirth.connect.connectors.js.JavaScriptDispatcher:193):
Error evaluating JavaScript Writer (JavaScript Writer "Submit Hx channel status to DataDog" on channel 1xxxxxxxxxxxxxxxx4).
com.mirth.connect.server.MirthJavascriptTransformerException:
CHANNEL: ChannelStatus-Poller-Count
CONNECTOR: Submit Hx channel status to DataDog
SCRIPT SOURCE: JavaScript Writer
SOURCE CODE:
218: var client = new com.mirth.connect.client.core.Client('https://127.0.0.1:443/')
221: // log on to the server
222: try{
LINE NUMBER: 218
DETAILS: Wrapped java.lang.IllegalStateException: zip file closed
at a7fa25a9-af95-4410-bb4f-f4f08ae0badb:218 (MsgReceivedInPastHour)
at a7fa25a9-af95-4410-bb4f-f4f08ae0badb:1013 (doScript)
at a7fa25a9-af95-4410-bb4f-f4f08ae0badb:1033
at com.mirth.connect.connectors.js.JavaScriptDispatcher$JavaScriptDispatcherTask.doCall(JavaScriptDispatcher.java:184)
at com.mirth.connect.connectors.js.JavaScriptDispatcher$JavaScriptDispatcherTask.doCall(JavaScriptDispatcher.java:122)
at com.mirth.connect.server.util.javascript.JavaScriptTask.call(JavaScriptTask.java:113)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: zip file closed
at java.util.zip.ZipFile.ensureOpen(ZipFile.java:686)
at java.util.zip.ZipFile.access$200(ZipFile.java:60)
at java.util.zip.ZipFile$ZipEntryIterator.hasNext(ZipFile.java:508)
at java.util.zip.ZipFile$ZipEntryIterator.hasMoreElements(ZipFile.java:503)
at java.util.jar.JarFile$JarEntryIterator.hasNext(JarFile.java:253)
at java.util.jar.JarFile$JarEntryIterator.hasMoreElements(JarFile.java:262)
at org.reflections.vfs.ZipDir$1$1.computeNext(ZipDir.java:30)
at org.reflections.vfs.ZipDir$1$1.computeNext(ZipDir.java:26)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:141)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:136)
at org.reflections.Reflections.scan(Reflections.java:240)
at org.reflections.Reflections.scan(Reflections.java:204)
at org.reflections.Reflections.<init>(Reflections.java:129)
at org.reflections.Reflections.<init>(Reflections.java:170)
at org.reflections.Reflections.<init>(Reflections.java:143)
at com.mirth.connect.client.core.Client.<init>(Client.java:176)
at com.mirth.connect.client.core.Client.<init>(Client.java:143)
at sun.reflect.GeneratedConstructorAccessor159.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.mozilla.javascript.MemberBox.newInstance(MemberBox.java:159)
at org.mozilla.javascript.NativeJavaClass.constructInternal(NativeJavaClass.java:266)
at org.mozilla.javascript.NativeJavaClass.constructSpecific(NativeJavaClass.java:205)
at org.mozilla.javascript.NativeJavaClass.construct(NativeJavaClass.java:166)
at org.mozilla.javascript.Interpreter.interpretLoop(Interpreter.java:1525)
at org.mozilla.javascript.Interpreter.interpret(Interpreter.java:815)
at org.mozilla.javascript.InterpretedFunction.call(InterpretedFunction.java:109)
at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:405)
at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3508)
at org.mozilla.javascript.InterpretedFunction.exec(InterpretedFunction.java:120)
at com.mirth.connect.server.util.javascript.JavaScriptTask.executeScript(JavaScriptTask.java:150)
at com.mirth.connect.connectors.js.JavaScriptDispatcher$JavaScriptDispatcherTask.doCall(JavaScriptDispatcher.java:149)
... 6 more
The issue could not be solved within the Mirth Connect Administrator environment, so I used a DB query instead.
Go to the Mirth DB; you can find the related tables there, and it is safer to query the DB directly.
The reason not to invoke the client, per an answer from the Mirth Slack general channel:
"When using the Client class you're pretty much looping back to the server, since all the code is executed on the server anyways."
So avoid using the com.mirth.connect.client.core.Client class inside Mirth channel code.
Issue closed.
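For anyone who wants the DB-query route, here is a rough Scala/JDBC sketch. The connection URL, credentials, and especially the table and column names (d_channels, d_mm<localChannelId>, status, received_date, and the 'S' code for SENT) are assumptions; verify them against your own Mirth database and version before relying on this:
import java.sql.DriverManager

object SentCountPastHour {
  def main(args: Array[String]): Unit = {
    val channelId = "1xxxxxxxxxxxxxxxx4" // the channel UUID (placeholder)
    // Assumed connection details; adjust driver, URL, and credentials.
    val conn = DriverManager.getConnection(
      "jdbc:postgresql://127.0.0.1:5432/mirthdb", "mirth", "password")
    try {
      // Mirth keeps per-channel message tables keyed by a local channel id,
      // assumed here to be resolvable from d_channels.
      val lookup = conn.prepareStatement(
        "SELECT local_channel_id FROM d_channels WHERE channel_id = ?")
      lookup.setString(1, channelId)
      val rs = lookup.executeQuery()
      if (rs.next()) {
        val localId = rs.getLong(1)
        // Count connector messages with SENT status from the past hour.
        val count = conn.createStatement().executeQuery(
          s"SELECT COUNT(*) FROM d_mm$localId " +
            "WHERE status = 'S' AND received_date > NOW() - INTERVAL '1 hour'")
        if (count.next()) println(s"SENT in past hour: ${count.getLong(1)}")
      }
    } finally conn.close()
  }
}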
I have a big list of strings (140,866 elements) which takes some time to compute. Once computed, I want to use this list in a UDF or in a map over my DataFrame. I followed some tutorials and found this example:
val states = List("NY","New York","CA","California","FL","Florida")
val countries = Map(("USA","United States of America"),("IN","India"))
val broadcastStates = spark.sparkContext.broadcast(states)
val broadcastCountries = spark.sparkContext.broadcast(countries)
val data = Seq(("James","Smith","USA","CA"),
("Michael","Rose","USA","NY"),
("Robert","Williams","USA","CA"),
("Maria","Jones","USA","FL")
)
val columns = Seq("firstname","lastname","country","state")
import spark.sqlContext.implicits._
val df = data.toDF(columns:_*)
val df2 = df.map(row=>{
val country = row.getString(2)
val state = row.getString(3)
val fullCountry = broadcastCountries.value.get(country).get
val fullState = broadcastStates.value(0)
(row.getString(0),row.getString(1),fullCountry,fullState)
}).toDF(columns:_*)
df2.show(false)
which works fine.
But when I try to use my own list, I get this error:
org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:403)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:393)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1.apply(RDD.scala:850)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1.apply(RDD.scala:849)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.mapPartitionsWithIndex(RDD.scala:849)
at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:630)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:156)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:283)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:375)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3389)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2550)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2550)
at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3370)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3369)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2550)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2764)
at org.apache.spark.sql.Dataset.getRows(Dataset.scala:254)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:291)
at org.apache.spark.sql.Dataset.show(Dataset.scala:753)
at org.apache.spark.sql.Dataset.show(Dataset.scala:730)
... 54 elided
Caused by: java.io.NotSerializableException: org.apache.spark.SparkContext
Serialization stack:
- object not serializable (class: org.apache.spark.SparkContext, value: org.apache.spark.SparkContext@6ebc6ccc)
I get my list with
val myList = spark.read.option("header",true)
.csv(NER_PATH_S3)
.na.drop()
.filter(col("label") =!= "VALUE" )
.groupBy("keyword")
.agg(sum("n_occurences").alias("n_occurences"))
.filter(col("n_occurences") > 2)
.filter($"keyword".rlike("[^0-9]+"))
.select("keyword")
.collect()
.map(x => x(0).toString)
.toList
val myListBroadcast = sc.broadcast(myList)
I made sure it has exactly the same type as in the example, and I also tried reducing the size of the list by slicing it.
In my experience, instead of using
sc.broadcast(myList)
you can use
spark.sparkContext.broadcast(myList)
and that should work.
I faced a similar issue, and when I changed the code as suggested it worked like a charm.
Happy Learning.
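To make the suggestion concrete, here is a small self-contained Scala sketch. The object name and toy keyword list are placeholders for the real pipeline; the point is that the map closure captures only the serializable Broadcast handle, never the SparkContext itself:
import org.apache.spark.sql.SparkSession

object BroadcastSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("broadcast-sketch").master("local[*]").getOrCreate()
    import spark.implicits._

    val myList = List("alpha", "beta", "gamma") // stand-in for the computed keyword list
    // Broadcast through the active session's SparkContext.
    val myListBroadcast = spark.sparkContext.broadcast(myList)

    val df = Seq("alpha", "delta").toDF("keyword")
    val flagged = df.map { row =>
      val kw = row.getString(0)
      // Only myListBroadcast (serializable) is captured by this closure.
      (kw, myListBroadcast.value.contains(kw))
    }.toDF("keyword", "known")

    flagged.show(false)
    spark.stop()
  }
}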
So I am trying to connect to an ElasticSearch cluster, following these examples:
https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/transport-client.html
https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/java-docs-index.html
This is my code:
import java.net.InetAddress

import org.elasticsearch.client.transport.TransportClient
import org.elasticsearch.common.settings.Settings
import org.elasticsearch.common.transport.InetSocketTransportAddress

def saveToES(message: String): Unit = {
  println("start saving")
  // prepare addresses
  val c = InetAddress.getByName("xxx.com")
  println("inetaddress done")
  val d = new InetSocketTransportAddress(c, 9200)
  println("socketaddress done")
  val settings = Settings.settingsBuilder().put("cluster.name", "CLUSTER").build()
  println("settings created")
  val a = TransportClient.builder()
  println("Builder created")
  val aa = a.settings(settings)
  println("settings set")
  val b = a.build
  println("client built")
  val client = b.addTransportAddress(d)
  println("address added to client")
  val response = client.prepareIndex("performance_logs", "performance").setSource(message).get()
  println(response.toString)
  // on shutdown
  // client.close()
}
The program gets stuck on building the client (val b = a.build), so my last print is "settings set". No error or anything, just stuck. For firewall reasons I have to deploy the whole thing as a jar and execute it on a server, so I can't really debug with the IDE.
Any idea what the problem is?
I have a project in Java. This project has a class com.xyz.api.base.models.mongo.Member.
I want to import this Java project into a Scala project to use the Member class.
However, I get this error (the library is already downloaded into the Scala project's dependencies):
java.lang.RuntimeException: java.lang.ClassNotFoundException: models.mongo.Member
The strange thing is that there is no compilation error; the error above only happens at runtime. Furthermore, the error message does not mention com.xyz.api.base as the base package of models.mongo.Member.
My code:
import com.xyz.api.base.models.mongo.Member
import com.xyz.api.base.utils.RedisCacheImpl
import redis.RedisClient
object Redis extends App {
  implicit val akkaSystem = akka.actor.ActorSystem()
  val host: String = "127.0.0.1"
  val port: Int = 6379
  val db: Int = 0
  val timeout: Long = 10000L
  val key = "a2IxSE5kdW9HRHZUe"
  var redisCacheImpl: RedisCacheImpl = _
  try {
    RedisCacheImpl.configRedis(host, port, db, timeout)
    redisCacheImpl = RedisCacheImpl.getInstance()
    val obj = redisCacheImpl.get(key)
    val member = obj.asInstanceOf[Member]
    println(s"member id ${member.getMemberId}")
  } catch {
    case e: Exception => e.printStackTrace()
  }
}
Thank you for your help.
In this case, Spring Boot 1.2.3.RELEASE uses mongo-java-driver 2.12.5. For more details, go through this documentation: Link
I am trying to reproduce the example from the wiki tutorial for Project Wonder REST:
community.org/display/WEB/Your+First+Rest+Project#YourFirstRestProject-Addingpostsandauthorswithcurl
I am at the point where you add entries to the DB with curl (I couldn't do it, so I added them via SQL).
I am trying to run the curl command to retrieve entries and get an "Empty reply from server" error. The console reports the following:
Request start for URI /cgi-bin/WebObjects/BlogTutorial.woa/ra/blogEntries.json
Headers{accept = ("*/*"); host = ("127.0.0.1:45743"); user-agent = ("curl/7.38.0"); }
[2015-8-14 17:20:19 CEST] <WorkerThread14> <er.rest.routes.ERXRouteRequestHandler>: Exception while handling action named "index" on action class "your.app.rest.controllers.BlogEntryController" :com.webobjects.foundation.NSForwardException [java.lang.reflect.InvocationTargetException] null:java.lang.reflect.InvocationTargetException
_ignoredPackages:: ("com.webobjects", "java.applet", "java.awt", "java.awt.datatransfer", "java.awt.event", "java.awt.image", "java.beans", "java.io", "java.lang", "java.lang.reflect", "java.math", "java.net", "java.rmi", "java.rmi.dgc", "java.rmi.registry", "java.rmi.server", "java.security", "java.security.acl", "java.security.interfaces", "java.sql", "java.text", "java.util", "java.util.zip")
Headers{cache-control = ("private", "no-cache", "no-store", "must-revalidate", "max-age=0"); expires = ("Fri, 14-Aug-2015 15:20:19 GMT"); content-type = ("text/html"); content-length = ("9296"); pragma = ("no-cache"); x-webobjects-loadaverage = ("1"); date = ("Fri, 14-Aug-2015 15:20:19 GMT"); set-cookie = (); }
The request start and both Headers messages are mine, through an override of dispatchRequest.
Any ideas?