Infinispan default cache.get does not work - java

I downloaded the default package from infinispan.org, version 6.0.0.
Ran standalone.sh on CentOS release 6.3 and am trying to access it from a Java client using Hot Rod.
Scenario:
Put key/value: the server log says the element was added successfully.
Get by key: null is returned, and the server log says no element was found for this key.
GetBulk: the element is returned.
Here is the code that I use:
RemoteCacheManager cacheContainer = new RemoteCacheManager(
        new ConfigurationBuilder().addServer().host(SERVER_HOST).port(SERVER_PORT).build());
RemoteCache<Integer, Integer> cache = cacheContainer.getCache();
Integer previous = cache.put(key, value);        // server log: element added
Integer retrieved = cache.get(key);              // returns null (!)
Map<Integer, Integer> bulk = cache.getBulk();    // contains the entry
The same scenario works perfectly if Infinispan is installed on Windows.
Java version "1.7.0_45" (both client and server).
The same faulty behavior occurs for both remote and local calls.
Calling getBulk or keySet returns the data we expect.
Checked the sent and received keys: they are equal (a byte-level comparison is sketched below).
Checked the JVM version; tried both different and identical ones.
Tried using different keys and values: String, Integer, custom objects.
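To make the key check concrete, one can compare the bytes the client actually sends, since the server matches keys by their marshalled form. A minimal sketch (GenericJBossMarshaller, from org.infinispan.commons.marshall.jboss, is, as far as I know, the Java Hot Rod client's default marshaller; key is the key used above):
// Compare the marshalled form of the key passed to put with the one passed
// to get; the server stores and looks up entries by these bytes.
// objectToByteBuffer throws IOException/InterruptedException, omitted here.
GenericJBossMarshaller marshaller = new GenericJBossMarshaller();
byte[] putKeyBytes = marshaller.objectToByteBuffer(key);
byte[] getKeyBytes = marshaller.objectToByteBuffer(key);
System.out.println(Arrays.equals(putKeyBytes, getKeyBytes)); // expected: true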
What am I doing wrong?

Related

How to obtain PayloadSize from Genicam reference implementation?

I'm trying to access a GigE camera using the Genicam reference implementation, looking at the online resources and existing implementations (aravis, harvesters) and following the GenTL standard with the SFNC, which every Genicam compatible camera supports. The producer I'm currently using is from Basler, since the camera I have here is from them.
/* I wrapped the Genicam classes with my own. Here are the relevant parts */
tl = new GenicamTransportlayer("/opt/pylon/lib/gentlproducer/gtl/ProducerGEV.cti");
if0 = tl.getFirstInterface();
dev0 = if0.getFirstDevice();
ds = dev0.getFirstDataStream();
I'm able to connect to the System, Interface, Device and DataStream modules, connect the node maps, and am now trying to set up the buffers for acquisition. To do so I need to get the maximum payload size from the camera. The GenTL standard document says I need to query it from the DataStream module using
boolean definesPayloadSize = ds.getInfoBool8(StreamInfoCommand.STREAM_INFO_DEFINES_PAYLOADSIZE);
which gives me 0 or false. The producer MAY provide a PayloadSize feature which can be queried using
ds.getInfoSizet(StreamInfoCommand.STREAM_INFO_PAYLOAD_SIZE);
which is obviously also 0, and since it is only a MAY I cannot rely on it. The standard further tells me that if both fail, I need to inquire via the remote device's node map to read the PayloadSize:
long payloadSizeFromRemoteMap = dev0.remoteMap.getIntegerNode("PayloadSize").getValue();
This gives me 0 too. The standard goes on to say that if the producer does not implement an interface standard (whatever that means?), the required payload size has to be queried via the producer using the StreamInfo commands, which also fails (GenTL maps the constant STREAM_INFO_PAYLOAD_SIZE to 7, which produces a BufferTooSmallException on the System port).
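Put together, the fallback chain the standard describes looks roughly like this (using my own wrapper API from the snippets above, so the method names are mine, not raw GenTL):
// Fallback chain for determining the payload size, per the GenTL text above.
long payloadSize;
if (ds.getInfoBool8(StreamInfoCommand.STREAM_INFO_DEFINES_PAYLOADSIZE)) {
    // 1) The data stream module itself defines the payload size.
    payloadSize = ds.getInfoSizet(StreamInfoCommand.STREAM_INFO_PAYLOAD_SIZE);
} else {
    // 2) Otherwise fall back to the remote device's PayloadSize feature.
    payloadSize = dev0.remoteMap.getIntegerNode("PayloadSize").getValue();
}
// In my case both paths return 0, which is exactly the problem.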
At this point I'm confused about what to do. Most of my nodes are locked (I can overwrite TLParamsLocked but still cannot change parameters, e.g., execute a load of the default parameter set), so I cannot set Width/Height/ImageFormat to infer the PayloadSize:
/* Trying to set a default configuration fails */
IEnumeration userSetSelector = dev0.remoteMap.getEnumerationNode("UserSetSelector");
log.debug("Loading Feature set: " + userSetSelector.getEntries().get(0).getName());
// Prints: Loading Feature set: EnumEntry_UserSetSelector_Default
userSetSelector.setValue("Default");
dev0.remoteMap.getCommandNode("UserSetLoad").execute();
// AccessException: Node is not writable. : AccessException thrown in node 'UserSetLoad' while calling 'UserSetLoad.Execute()' - Node is not writable.
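If the pixel format is uncompressed, the payload could in principle be inferred from the geometry nodes, which should still be readable even while locked for writing. A hypothetical sketch (SFNC node names; Mono8 assumed for the bits per pixel):
// Hypothetical fallback: compute the raw frame size from the geometry.
long width  = dev0.remoteMap.getIntegerNode("Width").getValue();
long height = dev0.remoteMap.getIntegerNode("Height").getValue();
long bitsPerPixel = 8; // assumption: Mono8
long inferredPayload = width * height * bitsPerPixel / 8;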
Without knowing the size of the buffers I cannot continue. How can I infer the PayloadSize to set them up?

Cache data with Lua script, but getting RedisCommandExecutionException when accessing a non-local key in a cluster node

I have map data to cache in a Redis cluster using a Lua script in a Spring Boot project, such as:
{
"demoKey:{1}":"value1",
"demoKey:{2}":"value2",
"demoKey:{3}":"value3"
}
The Lua script looks like this:
local addMap = cjson.decode(ARGV[1]);
for fieldKey, fieldValue in pairs(addMap) do
redis.call("SET", fieldKey, fieldValue);
end
Java code:
final DefaultRedisScript<?> redisScript = new DefaultRedisScript<>();
redisScript.setScriptSource(LUA_SCRIPT);
redisClient.execute(redisScript, new ArrayList<>(), JsonUtil.toString(addMap));
I have set a hash tag in each Redis key, but I still get the following exception while running the program:
org.springframework.data.redis.RedisSystemException: Error in execution; nested exception is io.lettuce.core.RedisCommandExecutionException: ERR Error running script (call to f_7cce57ffe5b0b94fa78680955c993e808ffa5f16):
#user_script:7: #user_script: 7: Lua script attempted to access a non local key in a cluster node
I'd appreciate any help.
A Lua script is not allowed to access keys that do not live on the same Redis node the script is running on.
One solution is to tag the keys so that they land on the same Redis node: all keys that share the same hash tag belong to the same Redis instance.
It seems you have done the tagging using {1}, {2} and {3}, but each of these can hash to a different Redis instance, which leads to the error. Your tag should be the same for all keys; for example, you can use a userId, recordId, etc. as the tag.
For your example, if you change your map to the one below, it should work fine.
{
"demoKey_1:{demoKey}":"value1",
"demoKey_2:{demoKey}":"value2",
"demoKey_3:{demoKey}":"value3"
}
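Independently of the tag, in a cluster it is also safer to declare the keys via KEYS instead of passing them inside ARGV, so the client and server can route and validate them. A minimal sketch under that assumption (the script text and variable names are illustrative; addMap and redisClient are from the question):
// Declare keys via KEYS so the cluster can route the script; values go in ARGV.
DefaultRedisScript<Boolean> script = new DefaultRedisScript<>();
script.setScriptText(
        "for i = 1, #KEYS do redis.call('SET', KEYS[i], ARGV[i]) end return true");
script.setResultType(Boolean.class);

List<String> keys = new ArrayList<>(addMap.keySet());
Object[] values = addMap.values().toArray();
redisClient.execute(script, keys, values); // all keys must still share one hash tag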

Skip the same combination in Stream API

I have a List<Server> filteredList and I am streaming over each element, using forEach to set some fields:
filteredList.parallelStream().forEach(s->{
ARChaic option=new ARChaic();
option.setCpu(s.getNoOfCPU());
option.setMem(s.getMemory());
option.setStorage(s.getStorage());
option.setOperatingSystem(s.getOperationSystem());
ARChaic newOption = providerDes.getLatest(option); // this is an external service
s.setCloudMemory(newOption.getMem());
s.setCloudCPU(newOption.getCpu());
s.setCloudStorage(newOption.getStorage());
s.setCloudOS(newOption.getOperatingSystem());
});
The goal is to call this service, but if an identical option has already been built, reuse the earlier result instead of calling again.
For example: if two servers have the same memory, CPU, OS and storage, then getLatest should be called only once.
Suppose the elements at positions 1 and 7 in filteredList have the same config; then I shouldn't call getLatest again for 7, since I already have the previous option value, which I can set on element 7 (the work done after the service call).
You can add equals and hashCode to your Server class to define when two Server instances are equal. From your description, you will have to compare the memory, CPU, OS and storage.
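A minimal sketch of such an equals/hashCode pair (the getter names come from your code; the field types are assumptions, hence the java.util.Objects calls):
@Override
public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof Server)) return false;
    Server other = (Server) o;
    // Equality is defined by the four fields sent to the external service.
    return Objects.equals(getNoOfCPU(), other.getNoOfCPU())
            && Objects.equals(getMemory(), other.getMemory())
            && Objects.equals(getStorage(), other.getStorage())
            && Objects.equals(getOperationSystem(), other.getOperationSystem());
}

@Override
public int hashCode() {
    return Objects.hash(getNoOfCPU(), getMemory(), getStorage(), getOperationSystem());
}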
After this, you can group filteredList into a Map<Server, List<Server>>, with the unique servers as keys and the repeated server instances collected in the values. You call the service once per key, and after you get the result you update all the server instances in that key's value list with it.
Map<Server, List<Server>> uniqueServers = filteredList.stream()
        .collect(Collectors.groupingBy(Function.identity()));
uniqueServers.entrySet().parallelStream().forEach(entry -> {
Server currentServer = entry.getKey(); //Current server
ARChaic option=new ARChaic();
option.setCpu(currentServer.getNoOfCPU());
option.setMem(currentServer.getMemory());
option.setStorage(currentServer.getStorage());
option.setOperatingSystem(currentServer.getOperationSystem());
ARChaic newOption = providerDes.getLatest(option); // this is an external service
//update all servers with the result.
entry.getValue().forEach(server -> {
server.setCloudMemory(newOption.getMem());
server.setCloudCPU(newOption.getCpu());
server.setCloudStorage(newOption.getStorage());
server.setCloudOS(newOption.getOperatingSystem());
});
});

Digest calculation of signature fails on Windows, Java 1.8

The job to do:
I have got a signed SOAP request and I have to check if the signature is okay. The timestamp of the SOAP message is not of interest.
My solution so far:
I made a child class of org.apache.wss4j.dom.engine.WSSecurityEngine in which, in the method processSecurityHeader, the TimestampProcessor check is taken out of play:
public class SignatureSecurityEngine extends WSSecurityEngine {
...
public WSHandlerResult processSecurityHeader(Element securityHeader, RequestData requestData) throws org.apache.wss4j.common.ext.WSSecurityException {
...
Processor p = cfg.getProcessor(el);
if (p != null) {
try {
results = p.handleToken((Element) node, requestData, wsDocInfo);
} catch (Exception e){
if (p instanceof TimestampProcessor) {
// it's okay if timestamp is too old
} else {
throw e;
}
}
}
...
In fact it's just a copy of WSSecurityEngine with the try/catch added for the timestamp processor.
In older versions of wss4j and xmlsec this worked fine.
After a version upgrade of the components, I got the following strange issue:
The calculation of the signature digest fails in org.apache.jcp.xml.dsig.internal.dom.DOMReference.validate(...) if:
the program runs on Windows (JRE),
I debug on Windows (JDK), or
I debug on Linux (JDK).
BUT:
if the program runs on Linux (JRE), everything works fine!
For both (Windows/Linux), the configuration is:
wss4j 2.1.9
xmlsec 2.0.8
Java version: 1.8.0_131 (build 1.8.0_131-b11)
Observation:
It seems that a standard value (2jmj7l5rSw0yVb/vlWAYkK/YBwk=) remains as the calculated digest.
Any idea?
Additional facts (2017-06-13):
After Maarten's remark I (re)wrote some of the classes (in fact copy & paste) and added some System.out.println calls to get "debug information" at runtime. Really an odd, old-style and ugly thing...
But the result was quite interesting!
The stream for the MessageDigest was never set. This explains the 2jmj7l5rSw0yVb/vlWAYkK/YBwk=, which is the SHA-1 digest of an empty string (thanks Maarten!).
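That value is easy to verify; it is the Base64 encoding of the SHA-1 of zero bytes:
// Sanity check using java.security.MessageDigest and java.util.Base64
// (getInstance throws NoSuchAlgorithmException, omitted here).
MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
String digest = Base64.getEncoder().encodeToString(sha1.digest(new byte[0]));
System.out.println(digest); // prints 2jmj7l5rSw0yVb/vlWAYkK/YBwk=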
I then managed to fix that, so the stream is now set in my copied "debug" classes.
Result: if I debug now with my IDE, the calculation works!
But if I run it normally, the check fails :-((( Reason: the calculated value is not equal to the expected one.
Further observations showed that the wrong calculation possibly depends on the length of the data the digest is calculated over (!?!?!?).
Let's have a look at my log:
*** Digest for Timestamp
VGDOMReference.validate -> transform:
Expected digest: LxfIdEUVsbyLaevptByfIf2L0PA=
Actual digest: LxfIdEUVsbyLaevptByfIf2L0PA=
Reference[#Timestamp-31b20235-a1e2-4ed0-9658-67611572108e]
*** Digest for Body
Expected digest: Yv+zLpkog+xzAdMlSjoIaZntZLs=
Actual digest: sj2Gb0GEyjWuxoCmnBzDY266aG8=
Reference[#Body-c9761a98-46bb-4175-8f8b-bfa9b5c75509]
As you can see, the calculation for the timestamp is correct, but the one for the body is wrong.
Perhaps some stream buffer that is not entirely written?
After some tests it turned out that there was an additional encoding problem... :-(((
The original signed files are in UTF-8, but since Windows uses an ISO-8859-style default encoding, it did not match.
A first fix is to set the JVM's encoding in the script that calls the jar:
java -Dfile.encoding=UTF8 -jar <program>.jar
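A more robust alternative than overriding the JVM default is to pin the charset wherever the signed message is read. A hypothetical sketch (soapFile is an illustrative name, not from the code above; uses java.io and java.nio.charset.StandardCharsets):
// Read the signed SOAP message with an explicit charset instead of the
// platform default, so Windows and Linux behave identically.
Reader reader = new InputStreamReader(new FileInputStream(soapFile), StandardCharsets.UTF_8);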

Default values in the Hashtable extended by the Properties class

On a Windows 2012 R2 platform I noticed that winver returns 6.3, but System.getProperty("os.version") returns 6.2. I am looking at this source code:
public class Properties extends Hashtable<Object,Object> {
    protected Properties defaults;

    public String getProperty(String key) {
        Object oval = super.get(key);
        String sval = (oval instanceof String) ? (String)oval : null;
        return ((sval == null) && (defaults != null)) ? defaults.getProperty(key) : sval;
    }
}
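As a small illustration of that defaults chain (the values here are made up):
// A lookup that misses the main table falls through to the defaults table.
Properties defaults = new Properties();
defaults.setProperty("os.version", "6.3");
Properties props = new Properties(defaults);          // wires in the defaults
System.out.println(props.getProperty("os.version")); // "6.3", from defaults
props.setProperty("os.version", "6.2");
System.out.println(props.getProperty("os.version")); // "6.2", from the main table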
I suspect the value of os.version is obtained from here. Is my suspicion right?
Object oval = super.get(key);
What would be the contents of the Hashtable and how is it populated? (I have not loaded the Java source code as a project into my Eclipse workbench.)
The system property os.version is added by the JVM itself via the static native method initProperties(Properties props), as you can see here at line 527. This method is called while initializing the System class, which is done by the method initializeSystemClass().
In other words, the native code of your JVM is not able to recognize your OS version; you should upgrade your JDK to fix this issue.
Here is a blog post where the blogger had the same issue with an old version of Java; upgrading it was enough to fix the issue.
This problem was seen before Java 6u38; from 6u38 on, the issue is solved. The security baselines for the Java Runtime Environment (JRE) at the time of the release of JDK 6u38 are specified there.
First, using the EPM Java JDK version, it generates incorrect information.
Then, using a later JDK 7 release, the version is reported correctly.
So this highlights that it is down to the version of Java that is causing the issue.
Resource link: EPM 11.1.2.4 - Java versions and why Windows Server 2012 is not correctly recognised
I am suspecting the value of os.version is obtained from Object oval = super.get(key);. Is my suspicion right?
Answer:
You are right, but here is some of the mechanism.
First mechanism:
System.getProperty("os.version"); //which is called the OS version.
The getProperty method returns a string containing the value of the property. If the property does not exist, this version of getProperty returns null.
Second Mechanism:
System.getProperty("os.version", "Windows Server 2012 R2(6.3)");
getProperty requires two String arguments: the first argument is the key to look up and the second argument is a default value to return if the key cannot be found or if it has no value. For example, the invocation above looks up the system property os.version; if that key were not defined, getProperty would return the default value provided as the second argument, "Windows Server 2012 R2(6.3)", instead of null. Note that os.version is always defined by the JVM, so in practice the actual value is returned.
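To make the difference concrete (the property name in the first two lines is deliberately one that does not exist):
// One-argument form: null for a missing key.
String v1 = System.getProperty("no.such.property");              // null
// Two-argument form: the supplied default for a missing key.
String v2 = System.getProperty("no.such.property", "fallback");  // "fallback"
// os.version is always defined by the JVM, so the default is not used here.
String os = System.getProperty("os.version", "6.3");             // actual value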
The last method provided by the System class to access property values is the getProperties method, which returns a Properties object. This object contains a complete set of system property definitions.
What would be the contents of the Hashtable and how is this populated?
Answer:
Properties extends java.util.Hashtable. Some of the methods inherited from Hashtable support the following actions:
testing to see if a particular key or value is in the Properties object,
getting the current number of key/value pairs,
removing a key and its value,
adding a key/value pair to the Properties list,
enumerating over the values or the keys,
retrieving a value by its key, and
finding out if the Properties object is empty.
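For example, enumerating the system properties through the Properties object (filtered here to the os.* keys relevant to the question):
// Dump the os.* entries of the system Properties object.
Properties sysProps = System.getProperties();
for (String name : sysProps.stringPropertyNames()) {
    if (name.startsWith("os.")) {
        System.out.println(name + " = " + sysProps.getProperty(name));
    }
}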
You can learn more about System Properties and the Properties class here.
Property-related info can be read and changed through this Java class: PropertiesTest.java
Note and Recommendation from Oracle
Warning: Changing system properties is potentially dangerous and should be done with discretion. Many system properties are not reread after start-up and are there for informational purposes. Changing some properties may have unexpected side-effects.
Note: Some of the methods described above are defined in Hashtable, and thus accept key and value argument types other than String. Always use Strings for keys and values, even if the method allows other types. Also, do not invoke Hashtable.put or Hashtable.putAll on Properties objects; always use Properties.setProperty.
