I am trying to get the index status for a very large index using the Java API. In the background, I am updating the same index with large amounts of data.
I have tried:
IndicesStatsResponse indicesStatsResponse = client.admin().indices()
.prepareStats(index_name).all().execute().actionGet();
but this does not return the correct size because it is a very large index. However, using the REST call, I get the correct answer:
GET /index_name/_stats
Perhaps I need to use some kind of listener mechanism, so I tried
IndicesStatsRequest req = new IndicesStatsRequest();
req.all();
client.admin().indices().stats(req, new ActionListener<IndicesStatsResponse>() {
    @Override
    public void onResponse(IndicesStatsResponse response) {
        long s = response.getIndex(indexName).getTotal().getStore().getSizeInBytes();
        System.out.println(indexName + " " + s);
    }
    ..
});
but this threw org.elasticsearch.common.util.concurrent.EsRejectedExecutionException
I also tried
IndicesStatsRequest req = new IndicesStatsRequest();
req.all();
ActionFuture<IndicesStatsResponse> statsResponseFuture =
        client.admin().indices().stats(req.clear().flush(true).refresh(true));
IndicesStatsResponse statsResponse = statsResponseFuture.get(1, TimeUnit.MINUTES);
But this did not retrieve any useful information.
Strangely enough, when I run it in debug mode in Eclipse, it works perfectly. So maybe there is some flush mechanism I am missing.
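For example, I wondered whether an explicit refresh right before the stats call would help, something like this (just a sketch, not verified):

// force a refresh so recently indexed documents are reflected in the stats
client.admin().indices().prepareRefresh(index_name).execute().actionGet();

IndicesStatsResponse statsResponse = client.admin().indices()
        .prepareStats(index_name).all().execute().actionGet();
long sizeInBytes = statsResponse.getIndex(index_name).getTotal().getStore().getSizeInBytes();
System.out.println(index_name + " " + sizeInBytes);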
What is the way out?
I need to work on an AJAX response, which is one of the responses received upon visiting a page. I use Selenium DevTools and Java. I create a listener that intercepts a specific request, and then I want to work on the response it brings. However, I need to set up a static wait, or else Selenium doesn't have time to save the RequestId. I read the Chrome DevTools documentation, but it's a new thing for me. I wonder if there is a method that would allow me to wait for this call to be completed, other than the static wait.
Here is my code:
@Test(groups = "test")
public void x() throws InterruptedException, JsonProcessingException {
User user = User.builder().build();
ManageAccountStep manageAccountStep = new ManageAccountStep(getDriver());
DashboardPO dashboardPO = new DashboardPO(getDriver());
manageAccountStep.login(user);
DevTools devTools = ((HasDevTools) getDriver()).maybeGetDevTools().orElseThrow();
devTools.createSessionIfThereIsNotOne();
devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));
// end of boilerplate
final RequestId[] id = new RequestId[1];
devTools.addListener(Network.responseReceived(), response -> {
log.info(response.getResponse().getUrl());
if (response.getResponse().getUrl().contains(DESIRED_URL)){
id[0] = response.getRequestId();
}
});
dashboardPO
.clickLink(); // here is when my DESIRED_URL happens
Utils.sleep(5000); // Something like Thread.sleep(5000)
String responseBody = devTools.send(Network.getResponseBody(id[0])).getBody();
// some operations on responseBody
devTools.clearListeners();
devTools.disconnectSession();
}
If I don't use the 5-second wait, the id variable never gets assigned and I get a NullPointerException: requestId is required. During these 5 seconds, log.info prints all the API calls that are happening, and it almost always finds my id. I would like to refrain from the static wait, though. I am thinking about something similar to jQuery.active() == 0, but my page doesn't use jQuery.
You may try a custom Explicit Wait function. Something like this:
public String getResponseBody(WebDriver driver, DevTools devTools, RequestId[] id) {
    return new WebDriverWait(driver, Duration.ofSeconds(5))
            .ignoring(NullPointerException.class)
            .until(d -> devTools.send(Network.getResponseBody(id[0])).getBody());
}
So, it won't wait for the whole 5 seconds. The moment it gets the data, it comes out of the until method. Also add whichever exception was coming up for you here.
I have put these lines of code in a separate method because the devTools object is defined locally. In order to use it inside this anonymous inner function, it has to be final or effectively final.
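Alternatively, instead of polling, you could block on a CountDownLatch that the listener releases once the matching response arrives. A rough sketch, reusing the variable names from your test (not verified):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// inside the test, after Network.enable(...)
final RequestId[] id = new RequestId[1];
final CountDownLatch responseSeen = new CountDownLatch(1);

devTools.addListener(Network.responseReceived(), response -> {
    if (response.getResponse().getUrl().contains(DESIRED_URL)) {
        id[0] = response.getRequestId();
        responseSeen.countDown(); // release the waiting test thread
    }
});

dashboardPO.clickLink(); // triggers DESIRED_URL

// wait up to 5 seconds, but continue as soon as the response has been captured
// (the test method already declares throws InterruptedException)
if (!responseSeen.await(5, TimeUnit.SECONDS)) {
    throw new IllegalStateException("Response for " + DESIRED_URL + " was not captured in time");
}
String responseBody = devTools.send(Network.getResponseBody(id[0])).getBody();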
I seem to run into this issue when running tests in parallel (and headless) and trying to capture the requests and responses, I get:
{"No data found for resource with given identifier"},"sessionId" ...
However, .until now seems to only take an ExpectedCondition.
So a similar solution (to the accepted answer), but without using WebDriverWait.until, that I use is:
public static String getResponseBody(DevTools devTools, RequestId id) {
    String requestPostData = "";
    LocalDateTime then = LocalDateTime.now();
    String err;
    int it = 0;
    while (true) {
        err = "";
        try {
            requestPostData = devTools.send(Network.getResponseBody(id)).getBody();
        } catch (Exception e) {
            err = e.getMessage();
        }
        if (requestPostData != null && !requestPostData.equals("")) { break; }
        if (err == null || err.equals("")) { break; } // no error message: it's quite possible the response body really is an empty string
        long timeTaken = ChronoUnit.SECONDS.between(then, LocalDateTime.now());
        if (timeTaken >= 5) { requestPostData = err + ", timeTaken:" + timeTaken; break; }
        if (it > 0) {
            try {
                TimeUnit.SECONDS.sleep(it); // I prefer waiting longer and longer each iteration
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        it++;
    }
    return requestPostData;
}
It just loops until it no longer errors and returns the string back as soon as it can (though I actually set timeTaken >= 60 due to the many parallel requests).
I am working on a university project in Java. I have to download attachments from new emails using the Gmail API.
I successfully connected to the Gmail account using OAuth 2.0 authorization.
private static final List<String> SCOPES = Collections.singletonList(GmailScopes.GMAIL_READONLY);
I tried to get unseen mails using
ListMessagesResponse listMessageResponse = service.users().messages().list(user).setQ("is:unseen").execute();
listMessageResponse is not null, but when I call the method .getResultSizeEstimate() it returns 0.
I also tried to convert listMessageResponse to a List<Message> (I guess this is more usable) using:
List<Message> list = listMessageResponse.getMessages();
But the list throws a NullPointerException.
Then I tried to get each attachment with:
for (Message m : list) {
    List<MessagePart> part = m.getPayload().getParts();
    for (MessagePart p : part) {
        if (p.getFilename() != null && p.getFilename().length() > 0) {
            System.out.println(p.getFilename()); // Just to check the attachment filename
        }
    }
}
Is my approach correct (and if not, how do I fix it), and how should I download those attachments?
EDIT 1:
I fixed the q parameter; I mistakenly wrote is:unseen instead of is:unread.
Now the app reaches unread mails successfully.
(For example, there were two unread mails and both were successfully reached; I can get their IDs easily.)
Now this part throws a NullPointerException:
List<MessagePart> part = m.getPayload().getParts();
Both messages have attachments, and m is not null (I get the ID with .getId()).
Any ideas how to overcome this and download the attachments?
EDIT 2:
Attachment downloading part:
for (MessagePart p : parts) {
    if (p.getFilename() != null && p.getFilename().length() > 0) {
        String filename = p.getFilename();
        String attId = p.getBody().getAttachmentId();
        MessagePartBody attachPart;
        FileOutputStream fileOutFile = null;
        try {
            attachPart = service.users().messages().attachments().get("me", p.getPartId(), attId).execute();
            byte[] fileByteArray = Base64.decodeBase64(attachPart.getData());
            fileOutFile = new FileOutputStream(filename); // Or any other dir
            fileOutFile.write(fileByteArray);
        } catch (IOException e) {
            System.out.println("IO Exception processing attachment: " + filename);
        } finally {
            if (fileOutFile != null) {
                try {
                    fileOutFile.close();
                } catch (IOException e) {
                    // probably doesn't matter
                }
            }
        }
    }
}
Downloading works like a charm; I tested the app with different types of emails.
The only thing left is to change the label of an unread message (that was reached by the app) to read. Any tips on how to do it?
And one tiny question:
I want this app to fetch mails every 10 minutes using the TimerTask abstract class. Is there a need to manually "close" the connection with Gmail, or is that done automatically after the run() method iteration ends?
@Override
public void run() {
    // Some fancy code
    service.close(); // Something like that, if it even exists
}
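For context, the scheduling I have in mind is roughly this (just a sketch):

import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.TimeUnit;

// run the fetch every 10 minutes on a daemon timer thread
Timer timer = new Timer("gmail-fetcher", true);
timer.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        // fetch unread mails and download attachments here
    }
}, 0, TimeUnit.MINUTES.toMillis(10));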
I don't think ListMessagesResponse ever becomes null. Even if there are no messages that match your query, at least resultSizeEstimate will get populated in the resulting response: see Users.messages: list > Response.
I think you are using the correct approach, just that there is no message that matches your query. Actually, I had never seen is:unseen before. Did you mean is:unread instead?
Update:
When using Users.messages: list, only the id and the threadId of each message are populated, so you cannot access the message payload. In order to get the full message resource, you have to use Users.messages: get instead, as you can see in the referenced link:
Note that each message resource contains only an id and a threadId. Additional message details can be fetched using the messages.get method.
So in this case, after getting the list of messages, you have to iterate through the list, and do the following for each message in the list:
Get the message id via m.getId().
Once you have retrieved the message id, use it to call Gmail.Users.Messages.Get and get the full message resource. The retrieved message should have all fields populated, including payload, and you should be able to access the corresponding attachments.
Code sample:
List<Message> list = listMessageResponse.getMessages();
for (Message m : list) {
    Message message = service.users().messages().get(user, m.getId()).execute();
    List<MessagePart> part = message.getPayload().getParts();
    // Rest of code
}
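Regarding your other question about marking the processed messages as read: a minimal sketch (untested) would remove the UNREAD label via messages.modify. Note that this requires the gmail.modify scope rather than GMAIL_READONLY:

import com.google.api.services.gmail.model.ModifyMessageRequest;
import java.util.Collections;

// after processing message m, remove its UNREAD label
ModifyMessageRequest removeUnread = new ModifyMessageRequest()
        .setRemoveLabelIds(Collections.singletonList("UNREAD"));
service.users().messages().modify(user, m.getId(), removeUnread).execute();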
Reference:
Class ListMessagesResponse
Users.messages: list > Response
I would like to know how to get the API endpoint of a TestStep in a SoapUI XML project using Java.
I have used the following:
for (int i = 0; i < numberOfTestSteps; i++) {
    WsdlTestStep testStep = testCase.getTestStepAt(i);
    WsdlTestCaseRunner runner = new WsdlTestCaseRunner(testCase, new StringToObjectMap());
    runner.runTestStep(testStep);
    List<TestStepResult> resultList = runner.getResults();
    for (TestStepResult result : resultList) {
        String endPoint = ((MessageExchange) result).getEndpoint();
        System.out.println("End Point = " + endPoint);
    }
}
It only gives "www.test.com:8080", but I need the full API endpoint as shown in the image.
Can someone please help me solve this?
Below should give you what you are looking for:
String resourcePath = ((MessageExchange)result).getResource().getFullPath();
System.out.println("Resource Path = " + resourcePath);
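Combining it with the endpoint you already retrieve should give you the full request URL, roughly like this (a sketch, not verified):

MessageExchange exchange = (MessageExchange) result;
String fullUrl = exchange.getEndpoint() + exchange.getResource().getFullPath();
System.out.println("Full URL = " + fullUrl);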
You may look at the respective SoapUI API.
There is a very simple way too, if you wish to show that value from within the SoapUI project itself.
In the test case, there might be a REST Request Test step type. Add a Script Assertion as shown below:
log.info messageExchange.endpoint
I am learning Amazon CloudSearch, but I couldn't find any code in either C# or Java (I am building in C#, but if I can get code in Java I can try converting it to C#).
This is the only code sample I found in C#: https://github.com/Sitefinity-SDK/amazon-cloud-search-sample/tree/master/SitefinityWebApp.
This is one method I found in this code:
public IResultSet Search(ISearchQuery query)
{
    AmazonCloudSearchDomainConfig config = new AmazonCloudSearchDomainConfig();
    config.ServiceURL = "http://search-index2-cdduimbipgk3rpnfgny6posyzy.eu-west-1.cloudsearch.amazonaws.com/";
    AmazonCloudSearchDomainClient domainClient = new AmazonCloudSearchDomainClient("<access-key-id>", "<secret-access-key>", config);
    SearchRequest searchRequest = new SearchRequest();
    List<string> suggestions = new List<string>();
    StringBuilder highlights = new StringBuilder();
    highlights.Append("{\'");
    if (query == null)
        throw new ArgumentNullException("query");
    foreach (var field in query.HighlightedFields)
    {
        if (highlights.Length > 2)
        {
            highlights.Append(", \'");
        }
        highlights.Append(field.ToUpperInvariant());
        highlights.Append("\':{} ");
        SuggestRequest suggestRequest = new SuggestRequest();
        Suggester suggester = new Suggester();
        suggester.SuggesterName = field.ToUpperInvariant() + "_suggester";
        suggestRequest.Suggester = suggester.SuggesterName;
        suggestRequest.Size = query.Take;
        suggestRequest.Query = query.Text;
        SuggestResponse suggestion = domainClient.Suggest(suggestRequest);
        foreach (var suggest in suggestion.Suggest.Suggestions)
        {
            suggestions.Add(suggest.Suggestion);
        }
    }
    highlights.Append("}");
    if (query.Filter != null)
    {
        searchRequest.FilterQuery = this.BuildQueryFilter(query.Filter);
    }
    if (query.OrderBy != null)
    {
        searchRequest.Sort = string.Join(",", query.OrderBy);
    }
    if (query.Take > 0)
    {
        searchRequest.Size = query.Take;
    }
    if (query.Skip > 0)
    {
        searchRequest.Start = query.Skip;
    }
    searchRequest.Highlight = highlights.ToString();
    searchRequest.Query = query.Text;
    searchRequest.QueryParser = QueryParser.Simple;
    var result = domainClient.Search(searchRequest).SearchResult;
    return new AmazonResultSet(result, suggestions);
}
I have already created a domain in Amazon CloudSearch using the AWS console and uploaded documents using Amazon's predefined configuration option, that is, the IMDb movies JSON file provided by Amazon for the demo.
But I don't understand how to use this method. For example, if I want to search by director name, how do I pass that to this method, given that its parameter is of type ISearchQuery?
I'd suggest using the official AWS CloudSearch .NET SDK. The library you were looking at seems fine (although I haven't looked at it in any detail), but the official version is more likely to expose new CloudSearch features as soon as they're released, will be supported if you need to talk to AWS support, etc.
Specifically, take a look at the SearchRequest class -- all its params are strings so I think that obviates your question about ISearchQuery.
I wasn't able to find an example of a query in .NET but this shows someone uploading docs using the AWS .NET SDK. It's essentially the same procedure as querying: creating and configuring a Request object and passing it to the client.
EDIT:
Since you're still having a hard time, here's an example. Bear in mind that I am unfamiliar with C# and have not attempted to run or even compile this, but I think it should at least be close to working. It's based on the docs at http://docs.aws.amazon.com/sdkfornet/v3/apidocs/
// Configure the Client that you'll use to make search requests
string queryUrl = @"http://search-<domainname>-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com";
AmazonCloudSearchDomainClient searchClient = new AmazonCloudSearchDomainClient(queryUrl);
// Configure a search request with your query
SearchRequest searchRequest = new SearchRequest();
searchRequest.Query = "potato";
// TODO Set your other params like parser, suggester, etc
// Submit your request via the client and get back a response containing search results
SearchResponse searchResponse = searchClient.Search(searchRequest);
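Since you mentioned you could also start from Java and convert it, a roughly equivalent sketch with the AWS SDK for Java (v1) would look like this (also unverified; the endpoint is a placeholder and "Spielberg" is just an example query against the IMDb demo data):

import com.amazonaws.services.cloudsearchdomain.AmazonCloudSearchDomainClient;
import com.amazonaws.services.cloudsearchdomain.model.Hit;
import com.amazonaws.services.cloudsearchdomain.model.QueryParser;
import com.amazonaws.services.cloudsearchdomain.model.SearchRequest;
import com.amazonaws.services.cloudsearchdomain.model.SearchResult;

// Configure the client against your search endpoint (placeholder URL)
AmazonCloudSearchDomainClient searchClient = new AmazonCloudSearchDomainClient();
searchClient.setEndpoint("search-<domainname>-xxxxxxxxxxxxxxxxxxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com");

// Build a simple free-text query, e.g. a director's name
SearchRequest searchRequest = new SearchRequest()
        .withQuery("Spielberg")
        .withQueryParser(QueryParser.Simple)
        .withSize(10L);

// Submit the request and print the fields of each matching document
SearchResult result = searchClient.search(searchRequest);
for (Hit hit : result.getHits().getHit()) {
    System.out.println(hit.getFields());
}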
When I delete my Neo4j database after my tests like this:
public static final DatabaseOperation clearDatabaseOperation = new DatabaseOperation() {
    @Override
    public void performOperation(GraphDatabaseService db) {
        // This is deprecated on the GraphDatabaseService interface,
        // but the alternative is not supported by the implementation (RestGraphDatabase)
        for (Node node : db.getAllNodes()) {
            for (Relationship relationship : node.getRelationships()) {
                relationship.delete();
            }
            boolean notTheRootNode = node.getId() != 0;
            if (notTheRootNode) {
                node.delete();
            }
        }
    }
};
When querying the database through an AJAX search (i.e. searching on an empty database), it returns an internal 500 error:
localhost:9000/search-results?keywords=t 500 Internal Server Error
197ms
However, if I delete the database manually like this:
start r=relationship(*) delete r;
start n=node(*) delete n;
no exception is thrown.
It's most likely an issue with my code at a lower level in the call and return.
I'm just wondering why the error only occurs in one of the scenarios above and not both.
Use Cypher.
You should probably state more clearly that you are using the rest-graph-database.
Are you querying after the deletion or during it?
Please check your logs in data/graph.db/messages.log and data/log/console.log to find the error cause.
Perhaps you can also look at the response body of the HTTP 500 response.
As per your error, I guess your data is getting corrupted after deletion.
I have used the same code as yours to delete the nodes, except I put the iterator in a transaction and shut down the database after the operation.
e.g.
Transaction _tx = _db.beginTx();
try {
    for (/* your conditions */) {
        // Your code
    }
    _tx.success();
} catch (Exception e) {
    _logger.error(e.getMessage());
} finally {
    _tx.finish();
    _db.shutdown();
    graphDbFactory.cleanUp();
}
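Applied to your deletion loop, that would look roughly like this (a sketch, assuming an embedded GraphDatabaseService; with RestGraphDatabase the transaction handling may differ):

Transaction tx = db.beginTx();
try {
    for (Node node : db.getAllNodes()) {
        for (Relationship relationship : node.getRelationships()) {
            relationship.delete();
        }
        if (node.getId() != 0) { // keep the reference node
            node.delete();
        }
    }
    tx.success();
} catch (Exception e) {
    tx.failure();
} finally {
    tx.finish();
}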
Hope it will work for you.