How to explain duplicate entries in a Java HashMap?

This is in a multi-threaded context. I have read the existing topics about HashMap and cannot find a relevant question/answer.
The code:
private Map<String, String> eventsForThisCategory;
...
Map<String, String> getEventsForThisCategory() {
    if (eventsForThisCategory == null) {
        Collection<INotificationEventsObj> notificationEvents = notificationEventsIndex.getCategoryNotificationEvents(getCategoryId());
        eventsForThisCategory = new HashMap<String, String>();
        for (INotificationEventsObj notificationEvent : notificationEvents) {
            eventsForThisCategory.put(notificationEvent.getNotificationEventID(), notificationEvent.getNotificationEventName());
        }
        logger.debug("eventsForThisCategory is {}", eventsForThisCategory);
    }
    return eventsForThisCategory;
}
The output:
app_debug.9.log.gz:07 Apr 2016 13:47:06,661 DEBUG [WirePushNotificationSyncHandler::WirePushNotification.Worker-1] - eventsForThisCategory is {FX_WIRE_DATA=Key Economic Data, ALL_FX_WIRE=All, ALL_FX_WIRE=All, FX_WIRE_HEADLINES=Critical Headlines}
How is it possible?

I am quite sure that your map won't have two equal keys at the same time. What you see is an effect of modifying the map while iterating over it (in toString()). By the time the second "ALL_FX_WIRE" is written, the first one is no longer present in the map.
You already know that HashMap is not thread-safe, and eventsForThisCategory can be modified by another thread while eventsForThisCategory.toString() is running. So this output is to be expected.
Make sure eventsForThisCategory is not modified by multiple threads at the same time (or switch to ConcurrentHashMap), and make sure it is not modified while toString() is running (it is called when you create the debug output).

HashMap is not thread-safe. You can use Collections.synchronizedMap() to wrap your HashMap instance, or better, use a ConcurrentHashMap.
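One way to make the lazy getter race-free is double-checked locking with a volatile field, so the map is fully populated before any other thread can see it. This is a minimal sketch, not the poster's actual class: the field and method names mirror the question, but the population step is a placeholder since notificationEventsIndex is not shown.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: names mirror the question; the population step is a placeholder.
class CategoryEvents {
    private volatile Map<String, String> eventsForThisCategory;

    Map<String, String> getEventsForThisCategory() {
        Map<String, String> events = eventsForThisCategory;
        if (events == null) {
            synchronized (this) {
                events = eventsForThisCategory;
                if (events == null) {
                    Map<String, String> m = new ConcurrentHashMap<>();
                    m.put("ALL_FX_WIRE", "All"); // placeholder for the real population loop
                    // Publish only after the map is fully built, so toString()
                    // never observes a half-filled map.
                    eventsForThisCategory = events = m;
                }
            }
        }
        return events;
    }
}
```

The volatile modifier matters: without it, another thread could observe the reference before the map's contents are visible.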

Related

Get Obj from List<Obj> given an id without having to iterate through the whole List

I did a bit of investigating before posting this and found that the best way to find the data I need without iterating through a whole List is a HashMap. I've never had to use HashMaps before, and it's giving me a lot of trouble.
Given this Client class
public class Client {
    private String nroClient;
    private String typeDoc;
}
I need to get a typeDoc given a unique nroClient.
I've gotten this far
private String getTypeDoc(List<Client> clients, String nroClient) {
    Map<String, Client> map = new HashMap<String, Client>();
    for (Client client : clients) {
        map.put(client.getNroClient(), client);
    }
}
It just doesn't seem right at all, and I have no idea how to advance. I'd really appreciate any input. Sorry if this has been asked before; I really tried to find a solution before posting. Thanks.
You've basically got it, but building the map first is at least as slow as just looping over the list once (slower, in fact).
Given a j.u.List instance, you CANNOT answer the question 'get me the element in this list with ID x' quickly.
The solution is to remove the list entirely and use a map instead.
If you ALSO need list-like behaviour (for example, answering 'get me the 18th client'), you could use a LinkedHashMap, which remembers insertion order, but it still has no .get(18) method. If need be, write a class representing the notion of 'Clients' that internally holds BOTH a list and a map; its add method adds each client to both data structures, and then it can answer both 'get me the 18th client' and 'get me the client with this ID' quickly.
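The combined-structure idea above can be sketched as follows. This is an illustration, not production code: the Clients class and its method names are invented, and Client is reduced to the two fields from the question.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Client reduced to the two fields from the question.
class Client {
    private final String nroClient;
    private final String typeDoc;

    Client(String nroClient, String typeDoc) {
        this.nroClient = nroClient;
        this.typeDoc = typeDoc;
    }

    String getNroClient() { return nroClient; }
    String getTypeDoc() { return typeDoc; }
}

// Sketch of the 'both structures' idea: a List for positional access
// plus a Map for id lookup, kept in sync by a single add method.
class Clients {
    private final List<Client> byPosition = new ArrayList<>();
    private final Map<String, Client> byId = new HashMap<>();

    void add(Client c) {
        byPosition.add(c);
        byId.put(c.getNroClient(), c);
    }

    Client get(int position) { return byPosition.get(position); }  // "the 18th client"
    Client get(String nroClient) { return byId.get(nroClient); }   // "the client with this ID"
}
```

Both lookups are now O(1), at the cost of each client being referenced from two structures.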
Are you trying to return the match?
private String getTypeDoc(List<Client> clients, String nroClient)
{
    String typeDocFound = null;
    for (Client client : clients)
    {
        if (client.getNroClient().equals(nroClient))
        {
            typeDocFound = client.getTypeDoc();
            break;
        }
    }
    return typeDocFound;
}
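For a one-off lookup, the same loop can also be written as a stream: first match wins, null if no client matches. This is a sketch; the TypeDocLookup class is invented, and Client is reduced to the two fields from the question.

```java
import java.util.List;

// Client reduced to the two fields from the question.
class Client {
    private final String nroClient;
    private final String typeDoc;

    Client(String nroClient, String typeDoc) {
        this.nroClient = nroClient;
        this.typeDoc = typeDoc;
    }

    String getNroClient() { return nroClient; }
    String getTypeDoc() { return typeDoc; }
}

class TypeDocLookup {
    // Same behaviour as the loop: first match wins, null if absent.
    static String getTypeDoc(List<Client> clients, String nroClient) {
        return clients.stream()
                .filter(c -> nroClient.equals(c.getNroClient()))
                .map(Client::getTypeDoc)
                .findFirst()
                .orElse(null);
    }
}
```

Note this is still a linear scan; for repeated lookups the map-based approach above is the right tool.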

Java 8 Stream Collectors for Maps, misleading groupingBy error

I'm pretty confused by the differences between these two methods, but I'm sure I'm doing something wrong.
I have a working example and a non-working example below. In the working example I assign the variable tester as a raw Map; in the non-working example I try to assign it as a Map<String, Object>. The error in the second example is described in the comment on the collect() line.
I am failing to see the connection between the type of the tester variable and the type of the myMap variable.
// Working example (or at least it compiles)
public String execute(Map<String, String> hostProps) {
    Map tester = usages.stream()
        .map(usage -> prepareUsageForOutput(usage, rateCard.get()))
        .collect(Collectors.groupingBy(myMap -> myMap.get("meterName"))); // This case compiles
}

// Compiler error: just adding the type parameters to the Map breaks the collect() call
public String execute(Map<String, String> hostProps) {
    Map<String, Object> tester = usages.stream()
        .map(usage -> prepareUsageForOutput(usage, rateCard.get()))
        .collect(Collectors.groupingBy(myMap -> myMap.get("meterName"))); // In this case, the myMap.get() call can't be resolved; the compiler doesn't recognize that myMap is a Map
}

// This is the method that gets called in the methods above
static Map<String, Object> prepareUsageForOutput(Usage usage, RateCard rateCard) {
}
Update 1 - Wrong collector method
While Eran posted the explanation of my original issue, it revealed that I should have been using Collectors.toMap instead of Collectors.groupingBy, because my goal was one map entry per map returned from "prepare", where the key is a string and the value is the map itself. groupingBy always collects the values into Lists.
I also removed the filter statements, as they were irrelevant and significantly complicated the example.
This is the final result that met my goal:
Map<String, Object> tester = usages.stream()
.map(usage -> prepareUsageForOutput(usage, rateCard.get()))
.collect(Collectors.toMap(myMap ->(String)myMap.get("meterName"), Function.identity()));
Since prepareUsageForOutput returns a Map<String, Object>, this means the Collectors.groupingBy() collector operates on a Stream<Map<String, Object>>, which means it creates a Map<Object,List<Map<String, Object>>>, not a Map<String, Object>.
This answer doesn't explain the compiler error you got, but I have encountered misleading compilation errors several times where streams are involved, so I suggest changing the target type to Map<Object, List<Map<String, Object>>> and seeing if the error goes away.
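The type difference between the two collectors can be shown in a small self-contained example. The class name and the sample data are invented for illustration; the maps stand in for the ones prepareUsageForOutput() would return.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

class CollectorTypes {
    // groupingBy wraps every group in a List, so the value type is
    // List<Map<String, Object>>, never a plain Map<String, Object>.
    static Map<Object, List<Map<String, Object>>> grouped(List<Map<String, Object>> rows) {
        return rows.stream()
                .collect(Collectors.groupingBy(m -> m.get("meterName")));
    }

    // toMap keeps one entry per element: key from the key mapper, value
    // the element itself. (Unlike groupingBy, it throws on duplicate keys.)
    static Map<String, Object> byName(List<Map<String, Object>> rows) {
        return rows.stream()
                .collect(Collectors.toMap(
                        m -> (String) m.get("meterName"),
                        Function.identity()));
    }
}
```

Calling grouped() on two rows with distinct meterName values yields singleton lists as values, while byName() maps each name directly to its row.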

How to store values from Trident/Storm in a List (using the Java API)

I'm trying to create a few Unit Tests to verify that certain parts of my Trident topology are doing what they are supposed to.
I'd like to be able to retrieve all the values resulting after running the topology and put them in a List so I can "see" them and check conditions on them.
FeederBatchSpout feederSpout = new FeederBatchSpout("some_time_field", "foo_id");
TridentTopology topology = new TridentTopology();
topology.newStream("spout1", feederSpout)
    .groupBy(new Fields("some_time_field", "foo_id"))
    .aggregate(new Fields("foo_id"), new FooAggregator(),
               new Fields("aggregated_foos"))
// So... how do I retrieve the "aggregated_foos" from here?
I am running the topology as a TrackedTopology (I got the code from another S.O. question; thank you @brianghig for asking it and @Thomas Kielbus for the reply).
This is how I "launch" the topology and how I feed sample values into it:
TrackedTopology tracked = Testing.mkTrackedTopology(cluster, topology.build());
cluster.submitTopology("unit_tests", config, tracked.getTopology());
feederSpout.feed(new Values(MyUtils.makeSampleFoo(1)));
feederSpout.feed(new Values(MyUtils.makeSampleFoo(2)));
When I do this, I can see in the log messages that the topology is running correctly, and that the values are calculated properly, but I'd like to "fish" the results out into a List (or any structure, at this point) so I can actually put some Asserts in my tests.
I've been trying a ton of different approaches, but none of them work.
The latest idea was adding a bolt after the aggregation so it would "persist" my values into a list:
Below you'll see the class that tries to go through all the tuples emitted by the aggregate and would put them in a list that I had previously initialized:
class FieldFetcherStateUpdater extends BaseStateUpdater<FieldFetcherState> {
    final List<AggregatedFoo> results;

    public FieldFetcherStateUpdater(List<AggregatedFoo> results) {
        this.results = results;
    }

    @Override
    public void updateState(FieldFetcherState state, List<TridentTuple> tuples,
            TridentCollector collector) {
        for (TridentTuple tuple : tuples) {
            results.add((AggregatedFoo) tuple.getValue(0));
        }
    }
}
So now the code would look like:
// ...
List<AggregatedFoo> results = new ArrayList<>();
topology.newStream("spout1", feederSpout)
    .groupBy(new Fields("some_time_field", "foo_id"))
    .aggregate(new Fields("foo_id"), new FooAggregator(),
               new Fields("aggregated_foos"))
    .partitionPersist(new FieldFetcherFactory(),
                      new Fields("aggregated_foos"),
                      new FieldFetcherStateUpdater(results));
LOGGER.info("Done. Checkpoint results={}", results);
But nothing... The logs show Done. Checkpoint results=[] (empty list)
Is there a way to get that? I imagine it must be doable, but I haven't been able to figure out a way...
Any hint or link to pages or anything of the like will be appreciated. Thank you in advance.
You need to make results a static member variable. If you have multiple parallel tasks running (i.e., parallelism_hint > 1) you also need to synchronize write access to results.
In your case, results will be empty because Storm internally creates a new instance of your bolt (including a new instance of the ArrayList). Using a static variable ensures that you access the correct object, as there will be only one shared across all instances of your bolt.
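The effect can be sketched without the Storm API. In the sketch below, each instance stands in for one of the copies Storm creates by deserializing the updater per task; the class name is invented and only the static field is genuinely shared.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustration only, no Storm API: each instance plays the role of one
// deserialized copy of the updater that Storm creates per task.
class SketchUpdater {
    // One list shared by all instances; synchronized because several
    // tasks may write concurrently when parallelism_hint > 1.
    static final List<String> RESULTS =
            Collections.synchronizedList(new ArrayList<>());

    void updateState(List<String> tuples) {
        RESULTS.addAll(tuples); // every instance writes to the same list
    }
}
```

Two distinct instances writing to RESULTS both land in the same list, which is what the instance-level results field in the question fails to achieve.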

How to get voters lists from all linked issues

I am trying to create a JIRA plugin that does the following:
For each issue, takes all linked issues which are linked by "duplicates" or "is duplicated by" (or other predefined link types).
For each such issue, get a list (not necessarily a List object) of the voters on that issue.
My problem is that the javadoc has little to no information. Following a tutorial, I currently have:
public class VotersCount extends AbstractJiraContextProvider {
    @Override
    public Map<String, Integer> getContextMap(User user, JiraHelper jiraHelper) {
        Map<String, Integer> contextMap = new HashMap<>();
        Issue currentIssue = (Issue) jiraHelper.getContextParams().get("issue");
        // Issue[] linkedIssues = currentIssue.getLinkedIssuesBy(...); // Step 1 mock code
        // Voter[] voters = linkedIssues[3].getVoters(); // Step 2 mock code
        int count = voters.length; // Pretend there is some calculation here
        contextMap.put("votersCount", count);
        return contextMap;
    }
}
(and I use votersCount in the .vm file.)
However, I see no explanation in the javadocs for AbstractJiraContextProvider and getContextMap so I'm not even sure if it's the right approach.
In my own research I found the class ViewVoters which has the method Collection<UserBean> getVoters(), which is something I can work with, but I don't know how to obtain or construct such an object in a way which will interact with a given issue.
I am looking for a working code to replace my 2 lines of mock code.
1) Use one of the methods from IssueLinkService, maybe getIssueLinks.
2) Use issueVoterAccessor.getVoterUserkeys.
Instances of IssueLinkService and IssueVoterAccessor should be injected as parameters to the constructor of your VotersCount.
I solved it by using the following:
To get issues linked to Issue issue by specified link types:
LinkCollection linkCollection = ComponentAccessor.getIssueLinkManager().getLinkCollectionOverrideSecurity(issue);
Set<IssueLinkType> linkTypes = linkCollection.getLinkTypes();
// Perform operations on the set to get the issues you want.
for (IssueLinkType linkType : linkTypes) {
    List<Issue> l1 = linkCollection.getOutwardIssues(linkType.getName());
    List<Issue> l2 = linkCollection.getInwardIssues(linkType.getName());
}
To get all the voters on Issue issue:
ComponentAccessor.getVoteManager().getVoterUserkeys(issue);
I was later shown that one can extend CalculatedCFType and override getValueFromIssue, which hands you the current issue as a parameter, instead of using
Issue currentIssue = (Issue) jiraHelper.getContextParams().get("issue");

ConcurrentModificationException when invoking putAll

I have difficulties in understanding the following error.
Suppose I have a class A in which I implement the following method:
Map<Double, Integer> get_friends(double user) {
    Map<Double, Integer> friends = user_to_user.row(user);
    //friends.putAll(user_to_user.column(user));
    return friends;
}
Then in the main I do the following:
A obj = new A();
Map<Double,Integer> temp = obj.get_friends(6);
Well, this works fine. However, when I uncomment the following line in class A:
friends.putAll(user_to_user.column(user));
and run the program again, it crashes and throws a ConcurrentModificationException.
It is to be noted that I am creating the Table user_to_user as follows:
private HashBasedTable<Double, Double, Integer> user_to_user;
user_to_user = HashBasedTable.create();
What is further surprising is that when I interchange the way I fill friends, like this:
Map<Double,Integer> friends = user_to_user.column(user);
friends.putAll(user_to_user.row(user));
Then everything works fine.
Any idea?
The issue is that HashBasedTable is internally implemented as a Map<Double, Map<Double, Integer>>, and that the implementation of user_to_user.column(user) is iterating over the rows at the same time you're modifying the row associated with user.
One workable alternative would be to copy user_to_user.column(user) into a separate Map before putting it into the row.
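The fix can be illustrated with plain JDK maps standing in for HashBasedTable's internal rowKey -> (columnKey -> value) structure. The class and method names below are invented; Guava's real column() is a lazy view over the rows, which is exactly why taking an eager snapshot before putAll() avoids the exception.

```java
import java.util.HashMap;
import java.util.Map;

// JDK-only model of the table: rowKey -> (columnKey -> value).
class FriendsTable {
    final Map<Double, Map<Double, Integer>> backing = new HashMap<>();

    void put(double row, double col, int v) {
        backing.computeIfAbsent(row, k -> new HashMap<>()).put(col, v);
    }

    // Like Table.row(): a live map backed by the table.
    Map<Double, Integer> row(double user) {
        return backing.computeIfAbsent(user, k -> new HashMap<>());
    }

    // Unlike Guava's lazy column view, this walks the rows eagerly and
    // returns a detached snapshot -- the suggested fix.
    Map<Double, Integer> columnCopy(double user) {
        Map<Double, Integer> col = new HashMap<>();
        backing.forEach((rowKey, cols) -> {
            Integer v = cols.get(user);
            if (v != null) {
                col.put(rowKey, v);
            }
        });
        return col;
    }

    Map<Double, Integer> getFriends(double user) {
        Map<Double, Integer> friends = row(user);
        // Safe: the snapshot is complete before putAll() mutates the row,
        // so no iteration over the table is in progress while it changes.
        friends.putAll(columnCopy(user));
        return friends;
    }
}
```

With the lazy view, putAll() would modify the row while the column iteration over the same backing structure was still in flight, which is what triggered the ConcurrentModificationException.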
