RESTHeart issue with filters - java

I've been using RESTHeart on top of MongoDB to get CRUD support through a REST interface. It's working fine when I'm not using any filter; however, when I try to apply a filter to the HTTP GET request as described in the documentation, I get an error with the stack trace shown below.
Request: http://127.0.0.1:8080/inBeta/donor?filter="{'name':'john'}"
14:15:49.373 [XNIO-1 task-1] ERROR c.s.restheart.handlers.ErrorHandler - error handling the request
java.lang.ClassCastException: java.lang.String cannot be cast to org.bson.BSONObject
at com.softinstigate.restheart.db.CollectionDAO.lambda$getCollectionData$45(CollectionDAO.java:178) ~[restheart.jar:0.9.7]
at com.softinstigate.restheart.db.CollectionDAO$$Lambda$20/1288164368.accept(Unknown Source) ~[na:na]
at java.util.ArrayDeque$DeqSpliterator.forEachRemaining(Unknown Source) ~[na:1.8.0_31]
at java.util.stream.ReferencePipeline$Head.forEach(Unknown Source) ~[na:1.8.0_31]
at com.softinstigate.restheart.db.CollectionDAO.getCollectionData(CollectionDAO.java:177) ~[restheart.jar:0.9.7]
When I don't apply any filter the request returns a JSON object; with a filter, however, I get the ClassCastException above, as if the filter value were treated as a plain String instead of a BSONObject that RESTHeart can convert to a JSON response. I would appreciate any help or direction on where to look into the issue.
P.S. There is no tag for RESTHeart, so it would be helpful if someone could create a tag for the same.

Finally, the issue is resolved :)
I tried the other APIs provided in the documentation and found that everything works except filter, so I tried the request below:
http://127.0.0.1:8080/inBeta/donor?filter=%7B'username':'john'%7D
It worked. The culprit was the double quotes around the filter query.
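For reference, a minimal Java sketch (using the same collection path as above) of building the encoded filter URL without the surrounding quotes; the encoder also escapes the braces, quotes and colon, which the server decodes back:
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class RestHeartFilterUrl {
    public static void main(String[] args) throws Exception {
        // The filter value is the raw JSON document itself, with no surrounding double quotes.
        String filter = "{'name':'john'}";
        // Percent-encode only the filter value; { } ' : become %7B %7D %27 %3A.
        String encoded = URLEncoder.encode(filter, StandardCharsets.UTF_8.name());
        String url = "http://127.0.0.1:8080/inBeta/donor?filter=" + encoded;
        System.out.println(url);
        // -> http://127.0.0.1:8080/inBeta/donor?filter=%7B%27name%27%3A%27john%27%7D
    }
}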

Related

Reading List of JSONObjects from Postman request

I have this Postman request.
With these headers:
Then my API is supposed to read the typeIdMap through a filter object with the @RequestBody annotation.
However, when I try to read the typeIdMap I get this error:
You can see in the logs that typeIdMap is empty. However, when I change typeIdMap to be a List<Map<String,Object>> I am able to retrieve the values. Why is this?
I don't know the exact reasoning, but I figured it out: I was using org.json.JSONObject instead of org.json.simple.JSONObject. For some reason the simple variant is the way to go when reading in the data.
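A plausible explanation: org.json.simple.JSONObject extends java.util.HashMap, so Jackson can bind an incoming JSON object into it like any Map, whereas org.json.JSONObject is not a Map and ends up empty. A minimal sketch of a request-body wrapper (the class name is hypothetical; the field name matches the typeIdMap key from the question):
import java.util.List;
import java.util.Map;

import org.json.simple.JSONObject; // extends java.util.HashMap, so Jackson can populate it

// Hypothetical @RequestBody wrapper; the field name must match the JSON key "typeIdMap".
public class Filter {

    // Works: each entry is bound as a Map because the simple JSONObject is a Map subclass.
    private List<JSONObject> typeIdMap;

    // The alternative from the question works for the same reason:
    // private List<Map<String, Object>> typeIdMap;

    public List<JSONObject> getTypeIdMap() { return typeIdMap; }

    public void setTypeIdMap(List<JSONObject> typeIdMap) { this.typeIdMap = typeIdMap; }
}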

How to force URL decode before katharsis tries to parse the filter parameters? %5B %5D

I am using katharsis 2.0.0 to build a JSON:API based service. This is being done within spring-boot 1.3.0 for use with Ember 2.0 (ember-data). Ember properly formats the filter parameters as:
/resource/filter[id]=xxxx
And URL encodes it properly as:
/resource/filter%5Bid%5d=xxxxx
However, katharsis complains that it's not formatted properly and is not URL decoding the parameters. Is there any way to either:
URL decode the URL in the request before it gets to katharsis, or
get ember-data to not URL-encode the [ and ] characters?
I believe this may actually be a bug in katharsis but need a work-around.
It was confirmed as a bug and should be fixed. The work-around is to extend the ServletKatharsisInvokerFilter with one that overrides the getRequestQueryString method to apply URLDecoder.decode. Then create a JsonApiFilter that extends AbstractKatharsisFilter as a Component (BeanFactoryAware) and override createKatharsisInvokerContext to use the new invoker filter; a rough sketch of the first step follows.
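A rough sketch of the override, using only the class and method names mentioned in this answer (the exact katharsis-servlet 2.0.x signatures are not verified here, so treat it as pseudocode):
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

// Sketch only: class and method names are taken from the work-around description above.
public class UrlDecodingKatharsisInvokerFilter extends ServletKatharsisInvokerFilter {

    @Override
    protected String getRequestQueryString() {
        String raw = super.getRequestQueryString();
        if (raw == null) {
            return null;
        }
        try {
            // Turn filter%5Bid%5D=... back into filter[id]=... before katharsis parses it.
            return URLDecoder.decode(raw, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always supported
        }
    }
}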
katharsis-servlet and katharsis-spring v2.0.3 have been released with the query-parameter fix.
https://github.com/katharsis-project/katharsis-core/issues/167

jclouds IOExpection: Error writing request body to server

We are using jclouds with Rackspace, uploading lots of files via the Cloud Files API (multi-threaded).
Once in a while we get an exception on the objectApi.put line (see the example code at the bottom).
Exception
16-Jul-2015 11:58:00.811 SEVERE [threadsPool-1] org.jclouds.logging.jdk.JDKLogger.logError error after writing 8192/streaming bytes to https://*****/****.jpg
java.io.IOException: Error writing request body to server
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3478)
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3461)
at com.google.common.io.CountingOutputStream.write(CountingOutputStream.java:53)
at com.google.common.io.ByteStreams.copy(ByteStreams.java:74)
at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.writePayloadToConnection(JavaUrlHttpCommandExecutorService.java:297)
at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.convert(JavaUrlHttpCommandExecutorService.java:160)
at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.convert(JavaUrlHttpCommandExecutorService.java:64)
at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:91)
at org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:90)
at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:73)
at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:44)
at org.jclouds.reflect.FunctionalReflection$FunctionalInvocationHandler.handleInvocation(FunctionalReflection.java:117)
at com.google.common.reflect.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:87)
at com.sun.proxy.$Proxy176.put(Unknown Source)
at
A similar issue with S3 can be found here.
Example Code
ObjectApi objectApi = cloudFiles.getObjectApi(REGION, container);
ByteSource byteSource = Files.asByteSource(file);
Payload payload = Payloads.newByteSourcePayload(byteSource);
objectApi.put(hashedName, payload);
The question:
Has anyone experienced behavior like this? Does anyone have a workaround for this kind of issue?
Thanks
Alon
Networks are unreliable, so expect some exceptions when using cloud services, especially when dealing with many files. Specifically for jclouds uploads, we have some example code here:
https://github.com/jclouds/jclouds-examples/tree/master/blobstore-uploader
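As a starting point, here is a minimal hand-rolled retry around the put from the question (assuming the same objectApi, hashedName and file variables; this is a sketch, not the official example from the link above):
import com.google.common.io.ByteSource;
import com.google.common.io.Files;
import org.jclouds.io.Payload;
import org.jclouds.io.Payloads;

// Retry the upload a few times before giving up on a transient network failure.
int maxAttempts = 3;
for (int attempt = 1; ; attempt++) {
    // Build a fresh payload per attempt so the stream is read from the start each time.
    ByteSource byteSource = Files.asByteSource(file);
    Payload payload = Payloads.newByteSourcePayload(byteSource);
    try {
        objectApi.put(hashedName, payload);
        break; // success
    } catch (RuntimeException e) {
        // jclouds surfaces I/O errors such as "Error writing request body to server" here
        if (attempt == maxAttempts) {
            throw e; // give up after the last attempt
        }
    }
}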
Edit: I have also added a JIRA issue to make sure we add a test specifically for this situation in swift:
https://issues.apache.org/jira/browse/JCLOUDS-965

Getting "(#803) Some of the aliases you requested do not exist" error

When I am trying to get data from Facebook using the Graph API, I am getting this error:
{"error":
{"message":"(#803) Some of the aliases you requested do not exist: 124186682456_10151302011177457&access_token=REMOVED_ACCESS_TOKEN",
"type":"OAuthException",
"code":803}}
Can anyone help me with how to solve this problem?
Thanks in advance.
If that is an accurate representation of the error you're receiving, you're incorrectly appending the access token after a & character instead of a ?.
You need to use ? to start the query string and & to separate the parameters inside it, e.g.
https://graph.facebook.com/124186682456_10151302011177457?access_token=ACCESS_TOKEN
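For example, when building the URL in Java, put ? before the first parameter and & only between subsequent ones (fields here is just an illustrative extra parameter):
String objectId = "124186682456_10151302011177457";
String accessToken = "YOUR_ACCESS_TOKEN";
String url = "https://graph.facebook.com/" + objectId
        + "?access_token=" + accessToken   // '?' starts the query string
        + "&fields=id,name";               // '&' separates any further parameters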
The placement of the access_token parameter immediately after ? has nothing to do with the issue; it could be that your URL is littered with %20 or similar stuff.
I got this error when trying a test example from the Facebook API documentation. At first I read the highest-voted answer, but moving the access_token parameter immediately after ? didn't work. I then inspected the URL more closely and found encoded spaces (%20) stuck to some of the parameter fields; removing them fixed the issue.
Try the following URL (just replace the access_token parameter with yours):
https://graph.facebook.com/search?type=place&fields=name,checkins,picture&q=cafe&center=40.7304,-73.9921&distance=1000&access_token=[YOUR_ACCESS_TOKEN]
If it works, look for the problem in the Facebook parameter names.
Hope this saves you some time.
This was happening for me with the Instagram content publishing API, which also rides on the Facebook Graph API. It seems to happen when something is wrong with the API URL you are sending to Facebook via a GET request.
In my case, a bug in my code caused the user_id and access_token to not be appended to the URL before querying the Graph API.

Unable to create AZURE VM programmatically

We are trying to create a virtual machine through an HTTP client in Java using the REST API exposed by Azure. We are using the following request URL and XML, but we are getting a "Bad Request" response.
https://management.core.windows.net/{subscription-id}/services/hostedservices/{existing hoster service name}/deployments
<Deployment xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<Name>TestVMAnandP</Name>
<Label>bXl2bQ==</Label>
<RoleList>
<Role>
<RoleName>TestVMAnandP</RoleName>
<RoleType>PersistentVMRole</RoleType>
<ConfigurationSets>
<ConfigurationSet>
<ConfigurationSetType>LinuxProvisioningConfiguration</ConfigurationSetType>
<HostName>TestVMAnandP</HostName>
<UserName>root</UserName>
<UserPassword>test</UserPassword>
</ConfigurationSet>
</ConfigurationSets>
<DataVirtualHardDisks>
<DataVirtualHardDisk>
<Lun>10</Lun>
<LogicalDiskSizeInGB>50</LogicalDiskSizeInGB>
</DataVirtualHardDisk>
</DataVirtualHardDisks>
<OSVirtualHardDisk>
<SourceImageName>srini2-srini2-2012-08-23.vhd</SourceImageName>
<MediaLink>http://sriniteststore.blob.core.windows.net/vhds/srini2-srini2-2012-08-23.vhd</MediaLink>
</OSVirtualHardDisk>
<RoleSize>ExtraSmall</RoleSize>
</Role>
</RoleList>
<VirtualNetworkName>MyNetwork</VirtualNetworkName>
</Deployment>
If we give a service name that is the same as the VM name in the URL, we get a 404 error. We have tried most of the samples found on the web with our values substituted, but everything gives us a 400 error. It would be great if we could get some help.
Errors:
I am getting two different kinds of errors.
Error 1: When I use a new <service-name> in the URL
management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/
Response message: Not Found (404)
java.io.FileNotFoundException: management.core.windows.net/84cc18f5-5bdd-4c95-9d69-862c12c53507/services/hostedservices/anand/deployments
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
Caused by: java.io.FileNotFoundException: management.core.windows.net/84cc18f5-5bdd-4c95-9d69-862c12c53507/services/hostedservices/anand/deployments
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
Error 2: When I use an existing <service-name> in the URL
management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/
Response message: Bad Request (400)
java.io.IOException: Server returned HTTP response code: 400 for URL: management.core.windows.net/84cc18f5-5bdd-4c95-9d69-862c12c53507/services/hostedservices/azurecogservice/deployments
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
Caused by: java.io.IOException: Server returned HTTP response code: 400 for URL: management.core.windows.net/84cc18f5-5bdd-4c95-9d69-862c12c53507/services/hostedservices/azurecogservice/deployments
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
Valid XML (for comparison):
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<Deployment xmlns="http://schemas.microsoft.com/windowsazure">
<Name>190bed4a</Name>
<DeploymentSlot>Production</DeploymentSlot>
<Label>190bed4a</Label>
<RoleList>
<Role>
<RoleName>SuseOpenVm_rolec8fc</RoleName>
<RoleType>PersistentVMRole</RoleType>
<ConfigurationSets>
<ConfigurationSet>
<ConfigurationSetType>LinuxProvisioningConfiguration
</ConfigurationSetType>
<HostName>SuseOpenVm_rolec8fc</HostName>
<UserName>anandsrinivasan</UserName>
<UserPassword>Cloud360</UserPassword>
<DisableSshPasswordAuthentication>false</DisableSshPasswordAuthentication>
</ConfigurationSet>
<ConfigurationSet>
<ConfigurationSetType>NetworkConfiguration</ConfigurationSetType>
<DisableSshPasswordAuthentication>false</DisableSshPasswordAuthentication>
<InputEndpoints>
<InputEndpoint>
<LocalPort>22</LocalPort>
<Name>SSH</Name>
<Port>22</Port>
<Protocol>TCP</Protocol>
</InputEndpoint>
</InputEndpoints>
</ConfigurationSet>
</ConfigurationSets>
<OSVirtualHardDisk>
<MediaLink>https://portalvhdsvf842yxvkhbg4.blob.core.windows.net/vhds/190bed4a.vhd</MediaLink>
<SourceImageName>SUSE__openSUSE-12-1-20120603-en-us-30GB.vhd</SourceImageName>
</OSVirtualHardDisk>
<RoleSize>Small</RoleSize>
</Role>
</RoleList>
<VirtualNetworkName>anand360NW</VirtualNetworkName>
</Deployment>
Whenever I'm having issues with the REST API I first try to complete what I'm trying to do through the portal. In your case I tried creating a Linux VM (TestVMAnandP) with username root and password test. I immediately noticed the following errors:
User Name 'root' is not allowed
Password should be at least 8 characters
Password should contain 3 of the following:
a lowercase character
an uppercase character
a number
a special character
I was working on the same thing a while back.
I suggest you check out my implementation of a Java REST client that consumes this API in the Cloudify GitHub repo:
https://github.com/CloudifySource/cloudify/blob/master/esc/src/main/java/org/cloudifysource/esc/driver/provisioning/azure/client/MicrosoftAzureRestClient.java
Another good reference is the Node.js SDK provided by Microsoft; you can browse the code and see where you went wrong:
https://github.com/WindowsAzure/azure-sdk-for-node/blob/master/lib/services/serviceManagement/servicemanagementservice.js
Hope it helps.
An error code of 400 (or any 4xx) means there is something wrong with the request, and error code 404 specifically says "The server has not found anything matching the URI given". So can you verify that the information provided in OSVirtualHardDisk is correct?
<OSVirtualHardDisk>
<SourceImageName>srini2-srini2-2012-08-23.vhd</SourceImageName>
<MediaLink>http://sriniteststore.blob.core.windows.net/vhds/srini2-srini2-2012-08-23.vhd</MediaLink>
</OSVirtualHardDisk>
Update
Is this the service you are trying to use?
http://msdn.microsoft.com/en-us/library/windowsazure/jj157194.aspx
You can see the error code explanations here
http://msdn.microsoft.com/en-us/library/windowsazure/ee460801.aspx
It may simply be that the target URL you are using is not valid or not what the server expects.
The Service Management API documentation is seriously messed up, and I wouldn't be surprised if you're a victim of that. One thing you can try is changing the
<LogicalDiskSizeInGB>50</LogicalDiskSizeInGB>
element to
<LogicalSizeInGB>50</LogicalSizeInGB>
under your DataVirtualHardDisk element.
The reason I'm suggesting this is that if you look at the documentation for Get Data Disk (http://msdn.microsoft.com/en-us/library/windowsazure/jj157180.aspx), it mentions LogicalDiskSizeInGB as one of the response elements; however, when you execute that operation, the element you actually get back is LogicalSizeInGB.
Again it's a hunch. Give it a try.
Update
Another thing you might want to do is parse the exception you're getting for more details. In .NET we get a WebException (sorry, I haven't worked in Java for a long time), so what I normally do is read the response of that exception, which returns an XML document with somewhat more detail.
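The Java counterpart, assuming the call is made with a plain HttpURLConnection as the stack traces above suggest, is to read getErrorStream() when the response code is 4xx/5xx; the body contains Azure's XML error details:
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.util.Scanner;

// 'connection' is assumed to be the HttpURLConnection used for the deployment request.
int status = connection.getResponseCode();
if (status >= 400) {
    InputStream err = connection.getErrorStream();   // error body instead of getInputStream()
    if (err != null) {
        try (Scanner scanner = new Scanner(err, "UTF-8").useDelimiter("\\A")) {
            String errorXml = scanner.hasNext() ? scanner.next() : "";
            System.err.println("Azure error " + status + ": " + errorXml);
        }
    }
}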
Anand, there is a problem with this API. I have implemented this in Fluent Management and unfortunately the XML is order-dependent.
You can cycle between 400 and 500 HTTP errors if the ordering is not exact. I would use PowerShell to script your VM creation in verbose mode, then have your application emit the XML in the exact order you see there, and it should be fine.
In the next few days I'll be posting a Java service management client example on http://blog.elastacloud.com that consumes a .publishsettings file to make requests, which may be helpful for you.
