Writing Java code snippets in Visual Studio Code in json - java

I'd like to add some Java code snippets to Visual Studio Code to speed up things I commonly type, e.g. System.out.println(), but the snippets are written in JSON, which I have no experience with. Can anybody help me with how to structure these Java snippets, and how to actually use them while programming?
This was my own attempt:
"Print to console":
{
"prefix": "System",
"body" : ["System.out.println("],
"description": "Print to the console"
}
Though I don't know whether I've written it wrong or whether I'm not accessing the snippet correctly.

From https://code.visualstudio.com/docs/editor/userdefinedsnippets
You can define your own snippets for specific languages. To open up a snippet file for editing, open User Snippets under File > Preferences (Code > Preferences on Mac) and select the language for which the snippets should appear.
After opening java.json using the steps described above, you should see a commented-out example.
To fix your snippet, it should be:
"Print to Console": {
"prefix": "System",
"body": [
"System.out.println(\"$1\");"
],
"description": "Print to the console"
}
As you type System in VS Code, IntelliSense will show the snippet as a suggestion. Accept it with Tab or Enter and the cursor will land at the $1 tab stop inside the quotes, ready for the text you want to print.

Related

How to bind Build>Project to a key in Javatar (Sublime Text 3)?

I want to bind ctrl+shift+b to Build All Projects in Javatar, but there is no documentation on Google on how to do that. Here are my key binding settings/progress so far:
{
"keys": ["ctrl+shift+b"],
"command": "javatar_build_?"
}
What goes into the command argument? Any help is deeply appreciated.
Edit your keymap entry to look like this:
{
"keys": ["ctrl+shift+b"],
"command": "javatar_build",
"args": {
"build_type": "project"
}
}
and you should* be good to go.
* I don't program in Java and I don't have any projects lying around to test, so YMMV. Let me know how it works.

Executor is not displayed in Allure reports

Executor is not getting displayed in Allure reports. I have created an executor.json file which has only one attribute, i.e. tester, as you can see in the code below.
executor.json
{"Tester":"Suhail"}
When I generate the report, the Executor field is not shown; it displays as Unknown, as you can see in the screenshot attached below.
I am using Allure version 2.13.2.
Can anyone help me figure out where I am going wrong?
As you already found out, the Executor is displayed if you have an executor.json file in your allure-results folder when you generate your report.
This file is usually generated by your build server, for example Jenkins with the Allure plugin.
If you want to add this information manually, here is what the file looks like:
executor.json
{
"name": "Jenkins",
"type": "jenkins",
"url": "http://example.org",
"buildOrder": 13,
"buildName": "allure-report_deploy#13",
"buildUrl": "http://example.org/build#13",
"reportUrl": "http://example.org/build#13/AllureReport",
"reportName": "Demo allure report"
}
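If you are not running under a build server, you can also write this file yourself before generating the report. Here is a minimal Java sketch of that idea; the allure-results path and the field values are placeholders for illustration:
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WriteExecutorInfo {
    public static void main(String[] args) throws Exception {
        // Assumed location of the allure-results folder; adjust to your project layout.
        Path resultsDir = Paths.get("target", "allure-results");
        Files.createDirectories(resultsDir);

        // Same structure as the executor.json example above (values are placeholders).
        String executorJson = "{\n"
                + "  \"name\": \"Jenkins\",\n"
                + "  \"type\": \"jenkins\",\n"
                + "  \"url\": \"http://example.org\",\n"
                + "  \"buildOrder\": 13,\n"
                + "  \"buildName\": \"allure-report_deploy#13\",\n"
                + "  \"buildUrl\": \"http://example.org/build#13\",\n"
                + "  \"reportUrl\": \"http://example.org/build#13/AllureReport\",\n"
                + "  \"reportName\": \"Demo allure report\"\n"
                + "}\n";

        Files.write(resultsDir.resolve("executor.json"), executorJson.getBytes("UTF-8"));
        // then run: allure generate target/allure-results
    }
}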
I was going through this and found it interesting, so I am posting it as an answer; if anyone has more details on this, please post your own answer.
Here are my findings.
In the executor.json file, we need the general syntax below:
{"name":"Suhail", // this will print the tester name on the report
"buildName":"Give the project Name", // if this attribute is not given, **Unknown** is displayed
"type":"jenkins" // if this attribute is not given, a user icon is shown next to the name; otherwise a hat icon is displayed
}
I could only find this much; if anyone knows how to add more executor records, please let me know.

Netlogo Api Controller - Get Table View

I am using the NetLogo API controller with Spring Boot.
This is my code (I got it from this link):
HeadlessWorkspace workspace = HeadlessWorkspace.newInstance();
try {
    workspace.open("models/Residential_Solar_PV_Adoption.nlogo", true);
    workspace.command("set number-of-residences 900");
    workspace.command("set %-similar-wanted 7");
    workspace.command("set count-years-simulated 14");
    workspace.command("set number-of-residences 500");
    workspace.command("set carbon-tax 13.7");
    workspace.command("setup");
    workspace.command("repeat 10 [ go ]");
    workspace.command("reset-ticks");
    workspace.dispose();
}
catch (Exception ex) {
    ex.printStackTrace();
}
I got this result in the console:
But I want to get the table view and save it to a database. Which command can I use to get the table view?
Table view:
Any help, please?
If you can clarify why you're trying to generate the data this way, I or others might be able to give better advice.
There is no single NetLogo command or NetLogo API method to generate that table, you have to use BehaviorSpace to get it. Here are some options, listed in rough order of simplest to hardest.
Option 1
If possible, I'd recommend just running BehaviorSpace experiments from the command line to generate your table. This will get you exactly the same output you're looking for. You can find information on how to do that in the NetLogo manual's BehaviorSpace guide. If necessary, you can run NetLogo headless from the command line from within a Java program, just look for resources on calling out to external programs from Java, maybe with ProcessBuilder.
If you're running from within Java in order to setup and change the parameters of your BehaviorSpace experiments in a way that you cannot do from within the program, you could instead generate experiment XML files in Java to pass to NetLogo at the command line. See the docs on the XML format.
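For example, here is a minimal sketch of launching such a headless BehaviorSpace run from Java with ProcessBuilder; the jar name, install path, model path and experiment name are placeholders you would replace with your own, and the command-line flags are the ones described in the BehaviorSpace guide:
import java.io.File;

public class RunBehaviorSpace {
    public static void main(String[] args) throws Exception {
        // Jar name, install path, model path and experiment name are placeholders.
        ProcessBuilder pb = new ProcessBuilder(
                "java", "-Xmx1024m", "-Dfile.encoding=UTF-8",
                "-cp", "netlogo-6.2.0.jar", "org.nlogo.headless.Main",
                "--model", "models/Residential_Solar_PV_Adoption.nlogo",
                "--experiment", "experiment1",
                "--table", "results-table.csv");
        pb.directory(new File("/path/to/NetLogo"));  // folder that contains the NetLogo jar
        pb.inheritIO();                              // forward NetLogo's console output
        int exitCode = pb.start().waitFor();
        System.out.println("BehaviorSpace run finished with exit code " + exitCode);
    }
}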
Option 2
You can recreate the contents of the table using the CSV extension in your model and adding a few more commands to generate the data. This will not create the exact same table, but it will get your data output in a computer and human readable format.
In pure NetLogo code, you'd want something like the below. Note that you can control more of the behavior (like file names or the desired variables) by running other pre-experiment commands before running setup or go in your Java code. You could also run the CSV-specific file code from Java using the controlling API and leave the model unchanged, but you'll need to write your own NetLogo code version of the csv:to-row primitive.
globals [
  ;; your model globals here
  output-variables
]

to setup
  clear-all
  ;;; your model setup code here
  file-open "my-output.csv"
  ; the given variables should be valid reporters for the NetLogo model
  set output-variables [ "ticks" "current-price" "number-of-residences" "count-years-simulated" "solar-PV-cost" "%-lows" "k" ]
  file-print csv:to-row output-variables
  reset-ticks
end

to go
  ;;; the rest of your model code here
  file-print csv:to-row map [ v -> runresult v ] output-variables
  file-flush
  tick
end
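Building on that model code, here is a rough Java sketch of Option 2 driven through the controlling API: it runs setup and go as in your sample code, closes the file, and reads back the CSV the model wrote. The file name and model path follow the examples above, and the database insert is left as a placeholder:
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

import org.nlogo.headless.HeadlessWorkspace;

public class ExportModelTable {
    public static void main(String[] args) throws Exception {
        HeadlessWorkspace workspace = HeadlessWorkspace.newInstance();
        try {
            workspace.open("models/Residential_Solar_PV_Adoption.nlogo", true);
            workspace.command("setup");              // opens my-output.csv and writes the header row
            workspace.command("repeat 10 [ go ]");   // each go appends one data row
            workspace.command("file-close-all");     // make sure the CSV is flushed to disk
        } finally {
            workspace.dispose();
        }

        // Read the rows back; replace the println with your database insert.
        List<String> rows = Files.readAllLines(Paths.get("my-output.csv"));
        for (String row : rows) {
            System.out.println(row);
        }
    }
}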
Option 3
If you really need to reproduce the BehaviorSpace table export exactly, you can try to run a BehaviorSpace experiment directly from Java. The table is generated by this code but as you can see it's tied in with the LabProtocol class, meaning you'll have to setup and run your model through BehaviorSpace instead of just step-by-step using a workspace as you've done in your sample code.
A good example of this might be the Main.scala object, which extracts some experiment settings from the expected command-line arguments, and then uses them with the lab.run() method to run the BehaviorSpace experiment and generate the output. That's Scala code and not Java, but hopefully it isn't too hard to translate. You'd similarly have to setup an org.nlogo.nvm.LabInterface.Settings instance and pass that off to a HeadlessWorkspace.newLab.run() to get things going.

Google Vision API JSON Response in English only

I have spent a lot of time exploring the Google Vision API. I am trying to get the Vision API response in English only. Below is my request object, which includes language hints:
{
"requests": [
{
"features": [
{
"type": "IMAGE_PROPERTIES"
},
{
"type": "LANDMARK_DETECTION"
},
{
"type": "LABEL_DETECTION"
},
{
"type": "WEB_DETECTION"
},
{
"type": "FACE_DETECTION"
},
{
"type": "SAFE_SEARCH_DETECTION"
},
{
"type": "TEXT_DETECTION"
},
{
"type": "LOGO_DETECTION"
}
],
"image": {
"source": {
"imageUri": "https://images.dreamstream.com/prodds/prddsimg/OM_pasteIt22_12_2017_2_34_7806303.jpeg"
}
},
"imageContext": {
"languageHints": [
"en"
]
}
}
]
}
Even with this request object, I am not getting the correct response (it still contains multiple languages) from the Vision API.
If there are any steps to get the response in English only, please let me know. As of now, the response contains text in multiple languages, like below:
{
"url": "https://www.tummyummi.com/food/menu-aryaas-restaurant",
"pageTitle": "Aryaas India Restaurant - مطعم ارياس لبهند - TummYummi Restaurants",
"fullMatchingImages": [
{
"url": "https://www.tummyummi.com/food/upload/1509868727-Curd-Vada.jpg"
}
]
},
If I'm understanding correctly, the Vision API looked at your image and determined that it has seen a similar image at https://www.tummyummi.com/food/menu-aryaas-restaurant.
The title of this website is Aryaas India Restaurant - مطعم ارياس لبهند - TummYummi Restaurants.
It is not a bug that this non-English text is being sent to you, because you asked the API to use WEB_DETECTION.
It found a website that has that image, and gave you a link to it and its title.
From the docs, the ImageContext parameter languageHints allows you to set the expected language for text in the image, and will return an error if any other language is detected:
Text detection returns an error if one or more of the specified languages is not one of the supported languages.
It's important to note that this language setting only affects text detection.
If you want text detection to return only English elements, but not error out if it detects anything else, then that document recommends the following:
For languages based on the Latin alphabet, setting languageHints is not needed. In rare cases, when the language of the text in the image is known, setting a hint will help get better results (although it will be a significant hindrance if the hint is wrong)
Instead, to filter out any text that is not English, you would look at the TextAnnotation's locale field and discard anything that isn't en on the client side.
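Here is a rough sketch of that client-side filtering, assuming the raw JSON response is parsed with Jackson; the field names (responses, textAnnotations, locale, description) follow the Vision REST response, and annotations that do not declare a locale are kept as well, since word-level entries usually omit it:
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class FilterEnglishText {
    // Keeps only the text annotations whose declared locale is "en".
    public static void printEnglishAnnotations(String visionResponseJson) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(visionResponseJson);
        for (JsonNode response : root.path("responses")) {
            for (JsonNode annotation : response.path("textAnnotations")) {
                String locale = annotation.path("locale").asText("");
                // word-level annotations usually omit locale, so keep those as well
                if (locale.isEmpty() || "en".equals(locale)) {
                    System.out.println(annotation.path("description").asText());
                }
            }
        }
    }
}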
As far as detecting the language of the website title during WEB_DETECTION is concerned, I think that is out of scope for the Google Vision API, but you could try the language detection feature of the Cloud Translation API.
Thanks for the useful answer @dustinroepsch. Rather than relying on the Cloud Translation API, we can use a regex, because the only feature that contains non-English text is WEB_DETECTION (though this may vary).
In WEB_DETECTION, a few objects such as pagesWithMatchingImages and webEntities may contain non-English text. After parsing the JSON, we can use the following regex pattern to remove non-English text:
String regex = "[a-z,A-Z,0-9,($&+,:;=?##|'<>.^*()%!-)\\s]";
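For example, a small sketch of how that pattern could be applied after parsing; the helper class name is made up, and the character class roughly mirrors the one above, used here as a negated class so that everything outside it is stripped:
public class EnglishOnlyFilter {
    // Strips every character that is not in the allowed (roughly English/ASCII) set.
    private static final String NON_ENGLISH = "[^a-zA-Z0-9($&+,:;=?@#|'<>.^*()%!\\s-]";

    public static String keepEnglish(String text) {
        return text.replaceAll(NON_ENGLISH, "").trim();
    }

    public static void main(String[] args) {
        String pageTitle = "Aryaas India Restaurant - مطعم ارياس لبهند - TummYummi Restaurants";
        System.out.println(keepEnglish(pageTitle));
        // prints something like: Aryaas India Restaurant -  - TummYummi Restaurants
    }
}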

JVM crashes while implementing Python-Boilerpipe in Flask app

I'm writing a Flask app that uses boilerpipe to extract content. Initially I wrote the boilerpipe extraction as a script to extract website content, but when I try to integrate it with my API, the JVM crashes when executing the boilerpipe extractor. This is the error I get: https://github.com/misja/python-boilerpipe/issues/17
I have also raised an issue on GitHub.
from boilerpipe.extract import Extractor
import unicodedata
import json

class ExtractingContent:
    @classmethod
    def processingContent(self, sourceUrl, extractorType="DefaultExtractor"):
        extractor = Extractor(extractor=extractorType, url=sourceUrl)
        extractedText = extractor.getText()
        if extractedText:
            toNormalString = unicodedata.normalize('NFKD', extractedText).encode('ascii', 'ignore')
            # build the response dict directly; json.loads() expects a string, not a dict
            json_data = {"content": toNormalString, "url": sourceUrl, "status": "success",
                         "publisher_id": "XXXXX", "content_count": str(len(toNormalString))}
            return json_data
        else:
            json_data = json.dumps({"response": {"message": "No data found", "url": sourceUrl,
                                                 "status": "success", "content_count": "empty"}})
            return json.loads(json_data)
This is the script I'm trying to integrate into a Flask API that uses Flask-RESTful, SQLAlchemy and PostgreSQL. I also updated my Java, but that didn't fix the issue. Java version:
java version "1.7.0_45"
javac 1.7.0_45
Any help would be appreciated
Thanks
(copy of what I wrote in https://github.com/misja/python-boilerpipe/issues/17)
OK, I've reproduced the bug: the thread that calls the JVM is not attached to it, therefore the calls to JVM internals fail.
The bug comes from boilerpipe (see below).
First, monkey patching: in the code you posted on Stack Overflow, you just have to add the following code before the creation of the extractor:
import jpype

class ExtractingContent:
    @classmethod
    def processingContent(self, sourceUrl, extractorType="DefaultExtractor"):
        print "State=", jpype.isThreadAttachedToJVM()
        if not jpype.isThreadAttachedToJVM():
            print "Needs to attach..."
            jpype.attachThreadToJVM()
            print "Check Attached=", jpype.isThreadAttachedToJVM()
        extractor = Extractor(extractor=extractorType, url=sourceUrl)
About boilerpipe: the check if threading.activeCount() > 1 in boilerpipe/extractor/__init__.py, line 50, is wrong.
The calling thread must always be attached to the JVM, even if there is only one.
