I have two TreeMaps that I want to compare.
I currently have it written like below, but I feel this could be written more efficiently. I tried looking into comparators, but I don't think they fit my use case.
The maps are TreeMaps because the keys must be case-insensitive.
public void theseRulesAreTheSame() {
List<String> failures = new ArrayList<>();
TreeMap<String, NSG> configNsgs = platformConfiguration.getAzure().nsgs();
configNsgs.forEach((name, nsg) -> {
assertThat(azureAdapter.doesNsgExistInAzure(name))
.as("Unable to find network security group " + name + " in Azure.").isTrue();
List<SecurityRulesItem> configSecurityRules = nsg.getSecurityRules();
TreeMap<String, Object> azureSecurityRules = azureAdapter
.getSecurityRulesForNsg(name);
assertThat(configSecurityRules.size())
.as("The nymber of security rules in Azure does not correspond to the number of security rules in the configuration!")
.isEqualTo(azureSecurityRules.size());
configSecurityRules.forEach(configSecurityRule -> {
SecurityRuleInner azureSecurityRule = (SecurityRuleInner) azureSecurityRules
.get(configSecurityRule.getRuleName());
logger.info(
"Checking security rule " + configSecurityRule.getRuleName()
+ " in network security group "
+ nsg.getName());
if (null == azureSecurityRule) {
logFailure(failures, null, configSecurityRule.getRuleName());
} else {
if (!azureSecurityRule.access().toString().equalsIgnoreCase(configSecurityRule.getAccess())) {
logFailure(failures, configSecurityRule.getAccess(), azureSecurityRule.access());
}
if (!azureSecurityRule.destinationAddressPrefix().equalsIgnoreCase(configSecurityRule.getDestinationAddressPrefix())) {
logFailure(failures, configSecurityRule.getDestinationAddressPrefix(), azureSecurityRule.destinationAddressPrefix());
}
if (!azureSecurityRule.destinationPortRange().equalsIgnoreCase(configSecurityRule.getDestinationPortRange())) {
logFailure(failures, configSecurityRule.getDestinationPortRange(), azureSecurityRule.destinationPortRange());
}
if (!azureSecurityRule.sourceAddressPrefix().equalsIgnoreCase(configSecurityRule.getSourceAddressPrefix())) {
logFailure(failures, configSecurityRule.getSourceAddressPrefix(), azureSecurityRule.sourceAddressPrefix());
}
if (!azureSecurityRule.sourcePortRange().equalsIgnoreCase(configSecurityRule.getSourcePortRange())) {
logFailure(failures, configSecurityRule.getSourcePortRange(), azureSecurityRule.sourcePortRange());
}
if (!azureSecurityRule.protocol().toString().equalsIgnoreCase(configSecurityRule.getProtocol())) {
logFailure(failures, configSecurityRule.getProtocol(), azureSecurityRule.protocol());
}
if (!azureSecurityRule.direction().toString().equalsIgnoreCase(configSecurityRule.getDirection())) {
logFailure(failures, configSecurityRule.getDirection(), azureSecurityRule.direction());
}
}
});
});
if (!failures.isEmpty()) {
Assertions.fail(
"Error(s) detected while comparing the network security groups between Azure and the config. Failures: "
+ failures);
}
}
Thanks in advance
If we have the two types AzureSecurityRule and ConfigSecurityRule, we could make the comparison less verbose like this:
BiConsumer<AzureSecurityRule, ConfigSecurityRule> compareField(Function<AzureSecurityRule, String> f1, Function<ConfigSecurityRule, String> f2) {
return (az, cf) -> {
if (!f1.apply(az).equalsIgnoreCase(f2.apply(cf))) {
logFailure(failures, f2.apply(cf), f1.apply(az)); // assumes the failures list is in scope
}
};
}
...
List.of(
compareField(az -> az.access().toString(), cf -> cf.getAccess()),
compareField(az -> az.destinationAddressPrefix(), cf -> cf.getDestinationAddressPrefix()),
...
).forEach(check -> check.accept(azureSecurityRule, configSecurityRule));
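For completeness, a self-contained generic sketch of the same idea, with the failure list passed in explicitly rather than captured from the enclosing scope (the failure message format is made up here):
import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.Function;
// Compares one field of two differently-typed objects, case-insensitively,
// and records any mismatch in the given failure list.
static <A, B> BiConsumer<A, B> compareField(Function<A, String> f1, Function<B, String> f2, List<String> failures) {
return (a, b) -> {
if (!f1.apply(a).equalsIgnoreCase(f2.apply(b))) {
failures.add("expected '" + f2.apply(b) + "' but was '" + f1.apply(a) + "'");
}
};
}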
I have about 1,000 files with a date in their name. I would like to sort them by the date in the filename and pick the latest one whose date is the same as or earlier than an argument.
I have written this:
Pattern PATTERN = Pattern.compile("^\\d{4}-\\d{2}-\\d{2}-file\\.csv");
try {
deviceFiles = Files.list(filesDir.toPath())
.filter(path -> PATTERN.matcher(path.getFileName().toString()).matches()
&& !getDate(path).isAfter(ARGUMENT_DATE))
.collect(Collectors.toList());
Arrays.sort(deviceFiles.toArray(), new FileNamePathDateComparator());
logger.info("All files found are " + deviceFiles.stream().map(stream -> stream.toAbsolutePath().toString()).collect(Collectors.joining("\n")));
if (deviceFiles.isEmpty())
throw new IllegalStateException("There were no device files found");
else {
String deviceFilePath = deviceFiles.get(deviceFiles.size() - 1).toAbsolutePath().toString();
logger.info("Found device file: " + deviceFilePath);
return deviceFilePath;
}
} catch (IOException e) {
throw new UncheckedIOException(e);
}
The getDate method:
private LocalDate getDate(Path path)
{
try {
String[] parts = path.getFileName().toString().split("-");
return LocalDate.of(Integer.parseInt(parts[0]), Integer.parseInt(parts[1]), Integer.parseInt(parts[2]));
} catch (NumberFormatException ex) {
throw new IllegalArgumentException("bye", ex);
}
}
The comparator:
class FileNamePathDateComparator implements Comparator{
@Override
public int compare(Object o1, Object o2)
{
return getDate((Path)o1).compareTo(getDate((Path)o2));
}
}
When I run this locally, the logger prints all the files correctly sorted; the comparator works just fine.
But on a Kubernetes cluster the files are printed in random order, and I don't understand why.
Fixed! I have put the comparator in the stream rather than sorting the final list, and it works fine.
In the meantime, if someone can provide an explanation, I would appreciate it.
deviceFiles = Files.list(filesDir.toPath())
.filter(path -> PATTERN.matcher(path.getFileName().toString()).matches()
&& !getDate(path).isAfter(executionDate))
.sorted(new FileNamePathDateComparator()::compare)
.collect(Collectors.toList());
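The likely explanation: deviceFiles.toArray() returns a fresh array, so Arrays.sort(deviceFiles.toArray(), ...) sorted a throwaway copy and never reordered the list itself. Locally, the directory listing presumably happened to come back already in name (and therefore date) order, while on the cluster it did not; Files.list makes no guarantee about encounter order. Sorting the list in place would also have worked, as a minimal sketch:
// Sort the list itself instead of a discarded copy of it.
deviceFiles.sort(new FileNamePathDateComparator());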
A few weeks ago I changed all the multi-threaded actions in my application over to Akka.
However, it seems that I am starting to run out of heap space (after a week or so).
Basically, looking at all actors with
ActorSelection selection = getContext().actorSelection("/*");
shows that the number of actors increases all the time. After an hour of running I have more than 2,200. They are named like:
akka://application/user/$Aic
akka://application/user/$Alb
akka://application/user/$Alc
akka://application/user/$Am
akka://application/user/$Amb
I also noticed that when opening websockets (and closing them) there are these:
akka://application/system/Materializers/StreamSupervisor-2/flow-21-0-unnamed
akka://application/system/Materializers/StreamSupervisor-2/flow-2-0-unnamed
akka://application/system/Materializers/StreamSupervisor-2/flow-27-0-unnamed
akka://application/system/Materializers/StreamSupervisor-2/flow-23-0-unnamed
Is there something specific that I need to do to close them and let them be cleaned?
I am not sure the memory issue is related, but given how many actors there are after an hour on the production server, it could be.
[EDIT: added the code to analyse/count the actors]
public class RetrieveActors extends AbstractActor {
private String identifyId;
private List<String> list;
public RetrieveActors(String identifyId) {
Logger.debug("Actor retriever identity: " + identifyId);
this.identifyId = identifyId;
}
@Override
public Receive createReceive() {
Logger.info("RetrieveActors");
return receiveBuilder()
.match(String.class, request -> {
//Logger.info("Message: " + request + " " + new Date());
if(request.equalsIgnoreCase("run")) {
list = new ArrayList<>();
ActorSelection selection = getContext().actorSelection("/*");
selection.tell(new Identify(identifyId), getSelf());
//ask(selection, new Identify(identifyId), 1000).thenApply(response -> (Object) response).toCompletableFuture().get();
} else if(request.equalsIgnoreCase("result")) {
//Logger.debug("Run list: " + list + " " + new Date());
sender().tell(list, self());
} else {
sender().tell("Wrong command: " + request, self());
}
}).match(ActorIdentity.class, identity -> {
if (identity.correlationId().equals(identifyId)) {
ActorRef ref = identity.getActorRef().orElse(null);
if (ref != null) { // to avoid NullPointerExceptions
// Log or store the identity of the actor who replied
//Logger.info("The actor " + ref.path().toString() + " exists and has replied!");
list.add(ref.path().toString());
// We want to discover all children of the received actor (recursive traversal)
ActorSelection selection = getContext().actorSelection(ref.path().toString() + "/*");
selection.tell(new Identify(identifyId), getSelf());
}
}
sender().tell(list.toString(), self());
}).build();
}
}
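Worth noting on the cleanup question: an Akka actor is never garbage-collected while it is alive; it keeps running (and holding memory) until it is explicitly stopped. A minimal sketch of the usual options (someActorRef is a placeholder):
// Inside an actor that has finished its work:
getContext().stop(getSelf());
// Or from outside, by sending the standard poison-pill message:
someActorRef.tell(akka.actor.PoisonPill.getInstance(), ActorRef.noSender());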
I have this method:
public List<IncomeChannelCategoryMap> allIncomeChannels(final List<String> list) {
final CriteriaQuery<IncomeChannelCategoryMap> criteriaQuery = builder.createQuery(IncomeChannelCategoryMap.class);
final Root<IncomeChannelMapEntity> root = criteriaQuery.from(IncomeChannelMapEntity.class);
final List<Selection<?>> selections = new ArrayList<>();
selections.add(root.get(IncomeChannelMapEntity_.incomeChannel).get(IncomeChannelEntity_.code));
selections.add(root.get(IncomeChannelMapEntity_.logicalUnitCode));
selections.add(root.get(IncomeChannelMapEntity_.logicalUnitIdent));
selections.add(root.get(IncomeChannelMapEntity_.keyword));
criteriaQuery.multiselect(selections);
Predicate codePredicate = root.get(IncomeChannelMapEntity_.incomeChannel).get(IncomeChannelEntity_.code).in(list);
criteriaQuery.where(codePredicate);
return entityManager.createQuery(criteriaQuery).getResultList();
}
And this:
@Override
public List<IncomeChannelCategoryMap> allIncomeChannels(final EntityRequest<IncomeChannel> request) throws ApiException {
List<String> lists = request.getEntity().getIncomeChannels();
List<IncomeChannelCategoryMap> channels = incomeChannelMapDAO.allIncomeChannels(lists);
return new ArrayList<>(channels.stream().collect(Collectors.toMap(IncomeChannelCategoryMap::getIncomeChannelCode,
Function.identity(), (final IncomeChannelCategoryMap i1, final IncomeChannelCategoryMap i2) -> {
i1.setLogicalUnitIdent(i1.getLogicalUnitIdent() + "," + i2.getLogicalUnitIdent());
return i1;
})).values());
}
I am able to achieve this:
{
"incomeChannelCode": "DIRECT_SALES",
"logicalUnitCode": "R_CATEGORY",
"logicalUnitIdent": "7,8"
}
from
[
{
"incomeChannelCode": "DIRECT_SALES",
"logicalUnitCode": "R_CATEGORY",
"logicalUnitIdent": "7"
},
{
"incomeChannelCode": "DIRECT_SALES",
"logicalUnitCode": "R_CATEGORY",
"logicalUnitIdent": "8"
}
]
And everything is great, but I have one problem:
For example, DIRECT_SALES can have another logicalUnitCode, and right now I am only getting one of them. I want to achieve the same for logicalUnitCode as I did for logicalUnitIdent.
Any suggestions?
So what I want to achieve is this:
{
"incomeChannelCode": "DIRECT_SALES",
"logicalUnitCode": "R_CATEGORY","R_TYPE",
"logicalUnitIdent": "7,8"
}
Here is your updated code:
@Override
public List<IncomeChannelCategoryMap> allIncomeChannels(final EntityRequest<IncomeChannel> request) throws ApiException {
List<String> lists = request.getEntity().getIncomeChannels();
List<IncomeChannelCategoryMap> channels = incomeChannelMapDAO.allIncomeChannels(lists);
return new ArrayList<>(channels.stream().collect(Collectors.toMap(IncomeChannelCategoryMap::getIncomeChannelCode,
Function.identity(), (i1, i2) -> {
i1.setLogicalUnitIdent(i1.getLogicalUnitIdent() + ", " + i2.getLogicalUnitIdent());
if (!i1.getLogicalUnitCode().contains(i2.getLogicalUnitCode())) {
i1.setLogicalUnitCode(i1.getLogicalUnitCode() + ", " + i2.getLogicalUnitCode());
}
return i1;
})).values());
}
Just like logicalUnitIdent, logicalUnitCode will now also be grouped. I'm assuming that you don't want duplicates here: if logicalUnitCode is "R_CATEGORY" for both results, you want it once in the output, and if one is "R_CATEGORY" and the other is "R_TYPE", you want them grouped as "R_CATEGORY, R_TYPE". If my assumption is correct, this is your required answer.
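One caveat on the contains check: String.contains does substring matching, so a code that is a substring of another (say, a hypothetical R_TYPE inside R_TYPE_EXT) would be treated as already present. A sketch of a merge function that deduplicates by exact token instead, usable as the third argument to Collectors.toMap (accessor names as above):
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.function.BinaryOperator;
BinaryOperator<IncomeChannelCategoryMap> mergeChannels = (i1, i2) -> {
i1.setLogicalUnitIdent(i1.getLogicalUnitIdent() + ", " + i2.getLogicalUnitIdent());
// Split the accumulated codes, add the new one, and drop exact duplicates.
Set<String> codes = new LinkedHashSet<>(Arrays.asList(i1.getLogicalUnitCode().split(",\\s*")));
if (codes.add(i2.getLogicalUnitCode())) {
i1.setLogicalUnitCode(String.join(", ", codes));
}
return i1;
};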
I am trying to get a solution to the following problem:
How can I find values from "conditions" in "stream"?
At the moment I can only filter with the line.contains method, but I want the user to be able to pass any number of conditions, which would be saved in the array "conditions". I tried to build a for loop inside stream.filter, but I failed. ^^ Maybe you know an efficient way. :)
Thanks.
private static void streamSelectedFile(String p, String[] conditions) {
try (
Stream<String> stream = Files.lines(Paths.get(p), StandardCharsets.ISO_8859_1)) {
Stream<String> filteredStream =
stream.filter(line -> line.contains("conditionValue"));
filteredStream.forEach(elem -> {
System.out.println(elem + " Path: " + p);
});
} catch (IOException e) {
...
}
}
Use allMatch
stream.filter(line -> Stream.of(conditions).allMatch(line::contains))
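Plugged into the method from the question, it might look like this (a sketch; the UncheckedIOException rethrow is an assumption, since the original catch body is elided):
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;
private static void streamSelectedFile(String p, String[] conditions) {
try (Stream<String> stream = Files.lines(Paths.get(p), StandardCharsets.ISO_8859_1)) {
// Keep only the lines that contain every user-supplied condition.
stream.filter(line -> Stream.of(conditions).allMatch(line::contains))
.forEach(line -> System.out.println(line + " Path: " + p));
} catch (IOException e) {
throw new UncheckedIOException(e); // assumption: the original catch body is elided
}
}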
I succeeded in deploying my verticle, but I still get this error: "Result is already complete: succeeded". I don't understand why and need some explanation. This is my Deploy class:
public class Deploy {
private static Logger logger = LogManager.getLogger(Deploy.class);
private static void deployAsynchronousVerticalByIndex(Vertx vertx, int indexCurrentDeploy, JsonArray verticalArray, Future<Void> startFuture, JsonObject jsonObjectConfig) {
JsonObject currentVertical = verticalArray.getJsonObject(indexCurrentDeploy);
currentVertical.forEach(entry -> {
logger.debug("Starting deploy of class: " + entry.getKey() + ", With the config: " + entry.getValue() + ".");
DeploymentOptions optionsDeploy = new DeploymentOptions().setConfig(jsonObjectConfig);
ObservableFuture<String> observable = RxHelper.observableFuture();
vertx.deployVerticle(entry.getKey(), optionsDeploy, observable.toHandler());
observable.subscribe(id -> {
logger.info("Class " + entry.getKey() + " deployed.");
if (indexCurrentDeploy + 1 < verticalArray.size()) {
deployAsynchronousVerticalByIndex(vertx, indexCurrentDeploy + 1, verticalArray, startFuture, jsonObjectConfig);
} else {
logger.info("ALL classes are deployed.");
startFuture.complete();
}
}, err -> {
logger.error(err, err);
startFuture.fail(err.getMessage());
});
});
}
public static void deployAsynchronousVertical(Vertx vertx, JsonArray verticalArray, Future<Void> startFuture, JsonObject jsonObjectConfig) {
deployAsynchronousVerticalByIndex(vertx, 0, verticalArray, startFuture, jsonObjectConfig);
}
}
That's because you reuse your future between verticles and have a race condition there.
The simplest way to fix that would be:
if (!startFuture.isComplete()) {
startFuture.complete();
}
But that would actually only obscure the problem.
Extract your observable out of the loop, so that it listens only once for each verticle:
JsonObject currentVertical = verticalArray.getJsonObject(indexCurrentDeploy);
ObservableFuture<String> observable = RxHelper.observableFuture();
currentVertical.forEach(entry -> {
logger.debug("Starting deploy of class: " + entry.getKey() + ", With the config: " + entry.getValue() + ".");
DeploymentOptions optionsDeploy = new DeploymentOptions().setConfig(jsonObjectConfig);
vertx.deployVerticle(entry.getKey(), optionsDeploy, observable.toHandler());
});
observable.subscribe(id -> {
logger.info("Class " + id + " deployed.");
if (indexCurrentDeploy + 1 < verticalArray.size()) {
deployAsynchronousVerticalByIndex(vertx, indexCurrentDeploy + 1, verticalArray, startFuture, jsonObjectConfig);
} else {
logger.info("ALL classes are deployed.");
if (!startFuture.isComplete()) {
startFuture.complete();
}
}
}, err -> {
logger.error(err, err);
startFuture.fail(err.getMessage());
});
In addition to extracting your observable out of the loop, it is a good idea to use io.vertx.core.Promise instead of io.vertx.core.Future, and to use the promise.tryXXX() methods.
According to the io.vertx.core.Promise documentation:
Future's completion methods are deprecated; use Promise instead.
tryFail is like fail, but returns false when the promise is already
completed instead of throwing IllegalStateException, and it returns
true otherwise.
so...
instead of
if (!startFuture.isComplete())
startFuture.complete();
if (!startFuture.isComplete())
startFuture.fail(err.getMessage());
use
promise.tryComplete();
promise.tryFail(err.getMessage());
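Applied to the subscribe callback above, that might look like this (a sketch; startPromise stands in for startFuture, assuming the method signature is changed to take a Promise<Void>):
observable.subscribe(id -> {
logger.info("Class " + id + " deployed.");
if (indexCurrentDeploy + 1 < verticalArray.size()) {
deployAsynchronousVerticalByIndex(vertx, indexCurrentDeploy + 1, verticalArray, startPromise, jsonObjectConfig);
} else {
logger.info("ALL classes are deployed.");
startPromise.tryComplete(); // a no-op instead of an IllegalStateException if already completed
}
}, err -> {
logger.error(err, err);
startPromise.tryFail(err.getMessage());
});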