Hello, I was creating a Minecraft plugin and was watching a YouTube tutorial on how to store and reclaim data from a YML file. The plugin takes a player's UUID and stores it along with a chunk from the game. The person in the video was using an ItemStack array as opposed to a UUID string. I tried to convert the code to work with a string, but it only gave me more errors, so I was wondering how to do it so it does not convert to an array when it is put back into the HashMap.
public void restoreClaimedChunks() {
    this.getConfig().getConfigurationSection("data").getKeys(false).forEach(key -> {
        @SuppressWarnings("unchecked")
        UUID[] content = ((List<UUID>) Objects.requireNonNull(this.getConfig().get("data." + key))).toArray(new UUID[0]);
        chunks.put(key, content);
    });
}
The error I receive when I do this is:
incompatible types: java.util.UUID[] cannot be converted to java.util.UUID
and the YML file looks like this:
ZeradentSMP: zoren3105.zeradentsmp.ZeradentSMP
name: ZeradentSMP
version: 1.0
author: Zoren
Since UUID doesn't implement ConfigurationSerializable, the UUIDs in the YAML file are stored as Strings, which means a casting error occurs when you try to treat a list of Strings as a list of UUIDs. Try instead getting a list of Strings from the YAML file (either by casting to List<String> or by using ConfigurationSection.getStringList()) and then iterating through it to convert each one to a UUID with UUID.fromString().
EDIT
If the error you are receiving is a casting error between UUID[] and UUID, then the only thing that could cause it is that the value type of the chunks map is UUID and you are trying to put a UUID[] into it.
You can either change the map's value type to UUID[] or store single UUIDs rather than arrays.
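The round-trip the answer describes can be sketched like this. It is a minimal, Bukkit-free sketch: the map type Map<String, UUID[]> and the key name someChunkKey are assumptions, and in the plugin the input list would come from getConfig().getStringList("data." + key):

```java
import java.util.*;
import java.util.stream.Collectors;

public class UuidYamlDemo {
    // Assumed declaration: the chunks map keyed by config key, holding UUID arrays.
    static Map<String, UUID[]> chunks = new HashMap<>();

    // Convert the List<String> read from the YAML file back into a UUID[].
    static UUID[] fromStringList(List<String> stored) {
        return stored.stream()
                .map(UUID::fromString)
                .toArray(UUID[]::new);
    }

    // Convert a UUID[] into the List<String> form the YAML file can hold.
    static List<String> toStringList(UUID[] uuids) {
        return Arrays.stream(uuids)
                .map(UUID::toString)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        UUID id = UUID.randomUUID();
        // What getStringList("data." + key) would hand back:
        List<String> stored = List.of(id.toString());
        chunks.put("someChunkKey", fromStringList(stored));
        System.out.println(chunks.get("someChunkKey")[0].equals(id)); // true
    }
}
```

Storing is the mirror image: call toStringList() before config.set(), so the YAML only ever sees Strings.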
I have a scenario where we get a machine code from one machine that needs to be sent to another, by converting it to a string the other machine understands. The following are the scenarios:
if code is 'AGO PRF' then convert to 'AGO.P'
if code is 'HUSQ A' then convert to 'HUSQ.A'
if code is 'AIK B' then convert to 'AIK.B'
if code is 'ALUS WS' then convert to 'ALUS.WS'
if code is 'VST WSA' then convert to 'VST.WSA'
if code is 'SAB CL' then convert to 'SAB.CL'
if code is 'SPR WSB' then convert to 'NSPR.WSB'
if code is 'AXS PRD CL' then change it to 'AXS.PCL'
if code is 'CTEST RT' then convert to 'CTEST.R'
if code is 'ALUS U' then convert to 'ALUS.U'
if code is 'SFUN WI' then convert to 'SFUN.WI'
if code is 'RQI RT WI' then convert to 'RQI.RTWI'
if code is 'ECA WS WI' then change it to 'ECA.WSWI'
I used a Map to feed in these values as keys and give out the output, but I want to know if there is a more generic solution to this.
If there is neither a rule nor a regularity to the String replacement (I see none), then you need either a mapping table stored in the DB or a static Map<String, String> of these constants.
I recommend the Map if the number of mappings is small and they will not change often.
I recommend reading from the DB if the number is larger. This also lets you change the mapping at runtime with no need to rebuild and redeploy the entire application.
In terms of the data structure, a dictionary-based one, Map<String, String>, is the best way to go. It doesn't allow duplicated keys and is simple to use for the transformation:
List<String> listOfStringsToBeReplaced = loadFromSomewhere();
Map<String, String> map = loadFromDb();
List<String> listWithReplacedStrings = listOfStringsToBeReplaced.stream()
        .map(string -> map.getOrDefault(string, string))
        .collect(Collectors.toList());
I use Map::getOrDefault to either replace the value or keep it as is if no mapping is found.
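Put together as a runnable sketch, with loadFromDb() replaced by a hard-coded map for illustration (only a subset of the entries from the question is shown):

```java
import java.util.*;
import java.util.stream.Collectors;

public class CodeMapper {
    // Hard-coded stand-in for the mapping that could also be loaded from a DB table.
    static final Map<String, String> MAP = Map.of(
            "AGO PRF", "AGO.P",
            "HUSQ A", "HUSQ.A",
            "AIK B", "AIK.B",
            "ALUS WS", "ALUS.WS",
            "AXS PRD CL", "AXS.PCL");

    // Replace each code if a mapping exists; otherwise keep it unchanged.
    static List<String> replaceAll(List<String> codes) {
        return codes.stream()
                .map(s -> MAP.getOrDefault(s, s))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(replaceAll(List.of("AGO PRF", "UNKNOWN", "AXS PRD CL")));
        // → [AGO.P, UNKNOWN, AXS.PCL]
    }
}
```

Swapping the hard-coded Map for a DB query changes only how MAP is built; replaceAll stays the same.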
I am using the ng-file-upload AngularJS library to upload multiple files to a server, but that is the traditional way to do it. My requirement is that I don't need to store the files on the server as-is. I have a REST endpoint responsible for storing user input data in the DB. Along with the REST request I pass the file array object with the other form values. When the data reaches the REST endpoint, it accesses each attribute and stores the data, but when it tries to read the file array object, I cannot read the file content of each file.
Sample File Upload Code
jsfiddle
Note that I just want to pass only $scope.files along with the REST request. Please let me know how I can read the file content values on the server side by reading the file array in Java. If you know a better way to do this, please share your ideas.
REST Service Code Snippet
@POST
@Path("/manual")
@Produces(MediaType.APPLICATION_JSON)
public boolean insertResults(testVO testResult) {
    for (Object o : testResult.getFiles()) {
        LinkedHashMap<String, String> l = (LinkedHashMap<String, String>) o;
        System.out.println(l.get("result"));
    }
    return true;
}
Note: testVO.getFiles() returns an Object[] array.
In my preceding code I convert each object into a LinkedHashMap and access fields like size, type, etc., but my requirement is how to get the content belonging to that file.
This is a question about the train of thought, so please don't tell me to use a third-party library to deal with this.
Recently I had a job interview, and there was a question like the one below:
There is a huge JSON file, structured like a database:
{
  "tableName1 ": [
    {"t1field1":"value1"},
    {"t1field2":"value2"},
    ...
    {"t1fieldN":"valueN"}
  ],
  "tableName2 ": [
    {"t2field1":"value1"},
    {"t2field2":"value2"},
    ...
    {"t2fieldN":"valueN"}
  ],
  ...
  "tableNameN ": [
    {"tNfield1":"value1"},
    {"tNfield2":"value2"},
    ...
    {"tNfieldN":"valueN"}
  ]
}
And the requirements are:
Find a particular child-node by its given name, update its field's value, and save the result to a new JSON file.
Count the number of occurrences of a given field name and value.
When it's a normal-size JSON file, I wrote a utility class to load the JSON file from disk and parse it into a JSON object. Then I wrote two methods to handle the two requirements:
void upDateAndSaveJson(JSONObject json, String nodeName,
        Map<String, Object> map, Map<String, Object> updateMap,
        String outPath) {
    // map holds the target child-node's conditions
    // updateMap holds the update conditions
    // first find the target child-node, update it, and finally save it
    // ...code...
}

int getCount(JSONObject json, Map<String, Object> map) {
    // map holds the target field/value
    // ...code...
}
But the interviewer asked me to consider the situation when the JSON file is very huge, and to modify my code to make it more efficient.
My idea is to write a tool that splits the JSON file first. Because in the end I need a JSON object to invoke the previous two methods, before splitting the huge file I already know the parameters of those methods: a Map (holding the target child-node's conditions, or the target field/value) and nodeName (the child-node name).
So when I load the JSON file, I compare each line of the input stream with the target nodeName, and then start counting the objects in that child-node. If the rule is 100, then once it has 100 objects I split the child-node into a new, smaller JSON file and remove it from the source file.
Like below:
while ((line = reader.readLine()) != null) {
    for (String nodeName : nodeNames) {
        // check if it's the target node
        if (line.indexOf(nodeName) != -1) {
            // count the target child-node's objects
            // and then split into a smaller JSON file
        }
    }
}
After that I can use multiple threads to load the smaller JSON files created previously and invoke the two methods to process each JSON object.
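For the counting requirement, a single streaming pass already avoids holding the whole file in memory. Here is a minimal sketch under the assumption (which holds for the sample structure above) that each field/value pair sits on its own line; arbitrary JSON would need a real streaming tokenizer instead of line matching:

```java
import java.io.*;

public class StreamingFieldCount {
    // Count occurrences of a given field name/value pair while streaming,
    // so the whole file is never held in memory at once.
    static int countField(Reader source, String field, String value) throws IOException {
        String needle = "\"" + field + "\":\"" + value + "\"";
        int count = 0;
        try (BufferedReader reader = new BufferedReader(source)) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.contains(needle)) {
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        String json = "{\n\"tableName1 \":[\n"
                + "{\"t1field1\":\"value1\"},\n"
                + "{\"t1field2\":\"value1\"}\n]\n}";
        // In the real case, source would be a FileReader over the huge file.
        System.out.println(countField(new StringReader(json), "t1field1", "value1")); // 1
    }
}
```

The update-and-save requirement can reuse the same loop, writing each line (possibly rewritten) to the output file as it goes, which is effectively the splitting idea without the intermediate files.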
It's a question about the train of thought, so please don't tell me you can use a third-party library to deal with this problem.
So, is my thinking feasible? Or if you have some other idea, please share it.
Thanks.
I have a Java script that gets BLOB data from the database and then emails this file to a specific email address. My problem is that I have to use some framework functions (I can make DB calls only through these) and I think they don't handle BLOB datatypes... All I can get is the string representation of the result; this is the log output of the code (framework call):
String s = String.valueOf(result.get(j).getValue("BLOB_DATA"));
System.out.println(s);
Log result:
<binary data> 50 KB
So this is the data I have to convert SOMEHOW into a valid PDF file, but right now I'm stuck...
Is it even possible to convert it into a valid byte[]? I have tried several ways, but all I get are invalid files... :(
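If the framework call can hand back the raw value as a byte[] rather than a String (worth checking its API; a getter of that kind is purely hypothetical here), writing the PDF out is straightforward. The key point is to never route the bytes through String.valueOf, since that only produces the "<binary data>" placeholder and loses the content:

```java
import java.io.IOException;
import java.nio.file.*;

public class BlobToFile {
    // Write raw BLOB bytes straight to disk; never convert them to a String first.
    static Path writeBlob(byte[] blob, Path target) throws IOException {
        return Files.write(target, blob);
    }

    public static void main(String[] args) throws IOException {
        // Simulated BLOB content; in the real code this would come from the DB call.
        byte[] blob = "%PDF-1.4 ... fake content ...".getBytes();
        Path out = writeBlob(blob, Files.createTempFile("report", ".pdf"));
        System.out.println(Files.size(out) == blob.length); // true
    }
}
```

If the framework truly only exposes a String, the data is likely already corrupted at that point; the fix then belongs on the retrieval side, not in any String-to-byte[] conversion afterwards.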
I have a torrent file with a list of announce URLs; e.g. this is part of it:
announce-listll68:http://iptorrents.com:2790/b6d18a815ab4421a86de672d6833369d/announceel67:http://iptorrents.me:2710/b6d18a815ab4421a86de672d6833369d/announceel67:http://iptorrents.ru:6969/b6d18a815ab4421a86de672d6833369d/announceee
So here is one array with the key «announce-list», which contains three elements (bencoded data, http://en.wikipedia.org/wiki/Bencode).
So I am using the BDecoder.java class from Aelitis to decode it. While parsing, I get the following values in the Map:
{created by=[B#141d683, announce=[B#16a55fa, encoding=[B#32c41a, announce-list=[[[B#e89b94], [[B#13e205f], [[B#1bf73fa]], comment=[B#5740bb, creation date=1310060702, info={pieces=[B#5ac072, name=[B#109a4c, length=34209795, piece length=65536, private=1}}
So the announce list is filled with what look like hashes. How can I convert them to normal strings (such as «http://iptorrents.com:2790/b6d18a815ab4421a86de672d6833369d/announce»)? Or is it some algorithm issue in BDecoder.java?
This is the method of upper class to decode data: http://pastebin.com/HimqF0ye
The object returned in your case is a generic Map with no type defined, and the values are raw byte arrays: [B#141d683 is how Java prints a byte[] by default, so only the type and hash code appear in your output. A plain cast to String would throw a ClassCastException; instead, build a String from each byte[] value (e.g. new String(bytes, "UTF-8")) and that should do the trick.
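Concretely, the conversion looks like this (the byte[] below is simulated; in your code it comes straight out of the decoded Map):

```java
import java.nio.charset.StandardCharsets;

public class BencodeValueDemo {
    // Convert a decoded bencode value (a raw byte[]) into a readable String.
    static String asString(Object value) {
        return new String((byte[]) value, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Simulated value, shaped like what BDecoder returns for "announce".
        Object announce = "http://iptorrents.com:2790/b6d18a815ab4421a86de672d6833369d/announce"
                .getBytes(StandardCharsets.UTF_8);
        System.out.println(asString(announce));
    }
}
```

Strictly speaking, bencoded strings are arbitrary bytes (the "pieces" value, for instance, is binary SHA-1 data and must stay a byte[]), but announce URLs are ASCII in practice, so UTF-8 decoding is safe for them.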
Best regards,
Baptiste