I'm writing an Eclipse plugin and I need to open an XML file at a specific line number (where the error is).
I have followed the accepted answer to this question and it actually works... with the undesired side effect of generating resourceChanged() events in my FileSystemChangesListener listener.
Is there a way of jumping to the specific line without producing file changes? These events trigger other executions in the plugin.
I tried adding the TRANSIENT parameter as true to no avail as in:
HashMap<String, Object> map = new HashMap<String, Object>();
map.put(IMarker.LINE_NUMBER, lineNumber);
map.put(IMarker.TRANSIENT, true); // doesn't make any difference.
marker.setAttributes(map);
IDE.openEditor(page, marker);
Still generates the resourceChanged() event.
The IFile.createMarker call is generating the resource changed event; you can't prevent this.
However, you can identify that this is a marker change event in the IResourceDelta you receive - the getFlags() method will have the IResourceDelta.MARKERS flag set.
Note that resource deltas can be merged so there may be several flags set - for example if IResourceDelta.CONTENT is set the file's contents have also changed.
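As a minimal sketch of that flag test, here is the bitmask pattern in isolation. Note the constants below are stand-ins for illustration, not the real values from org.eclipse.core.resources.IResourceDelta, and isMarkerOnlyChange is a hypothetical helper:

```java
public class DeltaFlagCheck {
    // Stand-in bit flags; the real values are defined on IResourceDelta.
    static final int CONTENT = 0x100;
    static final int MARKERS = 0x2000;

    // True when the delta carries a marker change but no content change,
    // i.e. an event the listener can safely ignore.
    static boolean isMarkerOnlyChange(int flags) {
        return (flags & MARKERS) != 0 && (flags & CONTENT) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isMarkerOnlyChange(MARKERS));           // true: marker-only
        System.out.println(isMarkerOnlyChange(MARKERS | CONTENT)); // false: merged delta
    }
}
```

In a real IResourceChangeListener you would walk the event's delta tree and apply this test to each IResourceDelta before triggering your other executions.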
I'm using structured logging in a Spring Boot app using logstash and sometimes I want to include key values in the log that I don't want to be used in the message of the log. Is there a StructuredArgument or similar that allows for this?
An example of what I am doing currently is something like this:
// import lombok.extern.slf4j.Slf4j;
// import static net.logstash.logback.argument.StructuredArguments.kv;
// import static net.logstash.logback.argument.StructuredArguments.v;
log.info(
    "My message with one arg {}",
    v("key1", "arg I want to include value in place of placeholder in message and in json as key/value"),
    kv("key2", "arg I only want in the json and not in the message"));
Everything works as I intended, by which I mean the log includes both key/value pairs and the message only includes the first value in place of the placeholder. The issue is that I get an inspection warning in IntelliJ (PlaceholderCountMatchesArgumentCount) about the second structured argument, and I would like to avoid this without resorting to suppressing/ignoring it.
You can use Markers and pass one before your logging message - more details on GitHub.
logger.info(append("key2", "only json"),
    "My message with one arg {}",
    v("key1", "arg in msg and json"));
I personally don't like this because markers have a different purpose, so if structured arguments work for you, just ignore the warning in the IDE.
Anyway, all these JSON/structured implementations are workarounds for SLF4J 1.*, which was not built for that. There was an SLF4J 2.0.0-alpha1 release almost a year ago, but it is still in alpha and I haven't used it. Its API, though, should be ready for key-values, which are crucial in today's distributed log management systems.
If you make the log message a constant String, the code quality checks will not raise this warning.
You can make the structured argument print nothing into the formatted message:
(1) include the second placeholder {} inside the message
(2) use keyValue() instead of kv()
(3) provide the optional messageFormatPattern parameter (JavaDoc) equal to ""
Adjusting your example:
log.info(
    "My message with one arg {}{}", // note (1): second placeholder
    v("key1", "arg I want to include value in place of placeholder in message and in json as key/value"),
    keyValue("key2", "arg I only want in the json and not in the message", "")); // notes (2) + (3)
This will effectively replace the second placeholder with an empty string.
I'd like to read and change the zoom level of named destinations in a PDF file using iText 7. I've come up with the following code:
Map<String, PdfObject> names =
        document.getCatalog().getNameTree(PdfName.Dests).getNames();
for (Map.Entry<String, PdfObject> dest : names.entrySet()) {
    if (dest.getValue().isArray()) {
        PdfArray arr = (PdfArray) dest.getValue();
        PdfName asName = arr.getAsName(1); // e.g. /Fit
        arr.set(1, PdfName.FitR);
        arr.setModified();
    }
}
However, this code fails to work against my example file and has other flaws as well.
Most importantly, it only tries to deal with one type of zoom (/Fit), but other types (/XYZ and so on) should also be handled. Second, I don't know how to get the page number of a named destination, as the key pair of destination name and zoom value doesn't seem to contain this information. Please see a screenshot of the debug session below:
Note that there is already a question on SO dealing with exactly the same topic. The thing is, the answer to that question gives too little information to deal with this problem.
I'm trying to create an SNMP4j agent and am finding it difficult to understand the process correctly. I have successfully created an agent that can be queried from the command line using snmpwalk. What I am having difficulty with is understanding how I am meant to update the values stored in my implemented MIB.
The following shows the relevant code I use for creating the MIB (I implement Host-Resources-MIB)
agent = new Agent("0.0.0.0/" + port);
agent.start();
agent.unregisterManagedObject(agent.getSnmpv2MIB());
modules = new Modules(DefaultMOFactory.getInstance());
HrSWRunEntryRow thisRow = modules.getHostResourcesMib().getHrSWRunEntry()
.createRow(oidHrSWRunEntry);
final OID ashEnterpriseMIB = new OID(".1.3.6.1.4.1.49266.0");
thisRow.setHrSWRunIndex(new Integer32(1));
thisRow.setHrSWRunName(new OctetString("RunnableAgent"));
thisRow.setHrSWRunID(ashEnterpriseMIB);
thisRow.setHrSWRunPath(new OctetString("All is good in the world")); // Max 128 characters
thisRow.setHrSWRunParameters(new OctetString("Everything is working")); // Max 128 characters
thisRow.setHrSWRunType(new Integer32(HrSWRunTypeEnum.application));
thisRow.setHrSWRunStatus(new Integer32(HrSWRunStatusEnum.running));
modules.getHostResourcesMib().getHrSWRunEntry().addRow(thisRow);
agent.registerManagedObject(modules.getHostResourcesMib());
This appears to be sufficient to create a runnable agent. What I do not understand is how I am meant to change the values stored in the MIB (how do I, for example, change the value of HrSWRunStatus). There seem to be a few kludge ways but they don't seem to fit with the way the library is written.
I have come across numerous references to using/overriding the methods
prepare
commit
undo
cleanup
But cannot find any examples where this is done. Any help would be gratefully received.
In protected void registerManagedObjects(), you need to do something like new MOMutableColumn(columnId, SMIConstants.SYNTAX_INTEGER, MOAccessImpl.ACCESS_READ_WRITE, null); for your HrSWRunStatus. Take a look at the TestAgent.java example of SNMP4J-agent source.
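As a rough illustration of the life cycle those prepare/commit/undo/cleanup methods implement: a SET request is validated and staged, then either committed or rolled back. The class below is a stand-in sketch of that pattern only, not the real SNMP4J interfaces (which operate on SubRequest objects):

```java
// Stand-in sketch of SNMP4J's two-phase SET life cycle.
// The real method signatures live in org.snmp4j.agent.ManagedObject.
public class ManagedValue {
    private int value;          // committed value, as returned by GET
    private Integer pending;    // value staged by prepare()
    private Integer previous;   // saved so undo() can roll back

    // Validate and stage the new value; nothing visible changes yet.
    public boolean prepare(int newValue) {
        if (newValue < 0) {
            return false; // reject out-of-range values up front
        }
        pending = newValue;
        return true;
    }

    // Make the staged value visible, remembering the old one.
    public void commit() {
        previous = value;
        value = pending;
    }

    // Roll back to the value that was current before commit().
    public void undo() {
        if (previous != null) {
            value = previous;
        }
    }

    // Release per-request state once the SET has fully succeeded or failed.
    public void cleanup() {
        pending = null;
        previous = null;
    }

    public int get() {
        return value;
    }

    public static void main(String[] args) {
        ManagedValue status = new ManagedValue();
        if (status.prepare(2)) { // e.g. HrSWRunStatusEnum.running
            status.commit();
            status.cleanup();
        }
        System.out.println(status.get()); // prints 2
    }
}
```

In SNMP4J itself you rarely override these by hand for simple columns; a MOMutableColumn with ACCESS_READ_WRITE gives you default implementations, and your own code can change a cell's value directly via the table model.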
I currently have a job that works like this:
tPrejob-->tOracleConnection1--->tOracleConnection2--->tSetGlobalVar1(timestamp)--->tRunjob(runs prejob to gather file from FTP)
Then there is a tPostjob that is supposed to rename the processed file on the FTP server.
tPostjob--->tFTPRename
It should be renaming the file with "File Processed On " + ((String)globalMap.get("timestamp")) + "This is where I would put the original file name"
If I put a standard filename into the Filemask then it will run correctly, however if I try to make the filemask dynamic by passing the filename into it through globalMap.get then I get the Error:
"Exception in component tFTPRename_1 java.lang.NullPointerException"
I've tried several methods for passing the file name into the tFTPRename component, but none are working.
I'm currently capturing the file name in the subjob and outputting it to a txt file and then using tFileInputFullRow on the main job to create that variable. I tried passing this into a tSetGlobalVar and then adding it into the filemask as ((String)globalMap.get("FileName")), but had no luck.
I also tried several methods on the internet, but none of them worked, so I wasn't sure if it was me or if it has something to do with tFTPRename capabilities.
Main Job:
PreJob:
tFTPRename Component:
tFileInputFullRow:
It sounds like you're using the globalMap wrong at some point, which would certainly explain the NullPointerException, as the globalMap variable doesn't appear to have been set.
Typically the tSetGlobalVar component is for setting static or run-time generated variables into the globalMap, and I don't think you can actually pass data into it that it can then directly use and push to the globalMap. Your datetime stamp is a good use of the component, but you'll need to either use a tFlowToIterate component or a tJava(Row) component to force the data into the globalMap, using something like:
globalMap.put("fileName",inputrow.fileName);
Looking at your previous question, you should have the name of the file from the FTP in the job you are calling in your pre job. Typically you would then be able to run that as part of the main flow into a tBufferOutput component and read the data directly into the parent job (simply connect a main flow connector from the tRunJob component to the next component you want to process the data flow with, and don't forget to give the tRunJob component the same schema as your child job's tBufferOutput).
However, you have a complication here in that you have already used the buffer to capture all of the iterables from the tFTPList component so you're right in the fact that you need to go to a temporary flat file or database to push the state back to the parent job.
From here, though, you should be able to read in the flat file or database table that contains the file name in your parent job. For ease, you can then just connect this to a tFlowToIterate component, which will store that data in the globalMap (you should have 1 row and 1 column of data here, so it's a single variable).
Here's a basic example of running some hard coded data in a tFixedFlowInput to a tFlowToIterate to get it into the globalMap and then retrieve it again with another tFixedFlowInput component:
Once the data is in the tFlowToIterate component then you can easily call it with globalMap.get(rowName.schemaColumnName) or by hitting ctrl+space and selecting it under the tFlowToIterate component:
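Conceptually, globalMap is just a Map&lt;String, Object&gt;, which is why every read needs the cast. The standalone sketch below shows the put/get pattern; the key name "row1.fileName" and the file name are made up for illustration, not Talend-generated identifiers:

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalMapSketch {
    public static void main(String[] args) {
        // Talend's globalMap is effectively a Map<String, Object>.
        Map<String, Object> globalMap = new HashMap<>();

        // What tFlowToIterate (or a tJavaRow with globalMap.put) does for you:
        globalMap.put("row1.fileName", "invoice_2024.csv");

        // What you type into a component setting such as tFTPRename's filemask;
        // the cast is required because the map stores plain Objects.
        String fileName = (String) globalMap.get("row1.fileName");
        System.out.println("File Processed On 20240101 " + fileName);
    }
}
```

If the key is wrong or the value was never stored, the get returns null, which is exactly the NullPointerException you are seeing in tFTPRename.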
I'm attempting to execute an Upsert using the Novell JLDAP library, unfortunately, I'm having trouble finding an example of this. Currently, I have to:
public EObject put(EObject eObject) {
    Subject s = (Subject) eObject;
    // Query and grab attributes from subject
    LDAPAttributes attr = resultsToAttributes(getLDAPConnection().get(s));
    // No modification needed - return
    if (s.getAttributes().equals(attr)) {
        return eObject;
    } else {
        // Keys: REPLACE, ADD, DELETE - depending on which attributes are
        // present in the maps, I choose the operation which will be used
        Map<String, LDAPAttribute> operationalMap =
                figureOutWhichAttributesArePresent(s.getAttributes(), attr);
        // Add the modifications to a modification list
        ArrayList<LDAPModification> modList = new ArrayList<LDAPModification>();
        for (Map.Entry<String, LDAPAttribute> entry : operationalMap.entrySet()) {
            // Specify whether it is an update, delete, or insert here (entry.getKey())
            modList.add(new LDAPModification(entry.getKey(), entry.getValue()));
        }
        // Commit
        connection.modify("directorypathhere",
                modList.toArray(new LDAPModification[modList.size()]));
        return eObject;
    }
}
I'd prefer to not have to query on the customer first, which results in cycling through the subject's attributes as well. Is anyone aware if JNDI or another library is able to execute an update/insert without running multiple statements against LDAP?
Petesh was correct - the abstraction was implemented within the Novell library (as well as the UnboundId library). I was able to "upsert" values using the Modify.REPLACE param for every attribute that came in, passing in null for empty values. This effectively created, updated, and deleted the attributes without having to parse them first.
In LDAP, via LDIF files, an upsert would be a single event with two steps: a remove and an add of a value. This is denoted by a single dash on a line between the remove and the add.
I am not sure how you would do it in this library. I would try modList.remove and then modList.add one after another and see if that works.
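For reference, this is what that remove-then-add looks like in an LDIF modify record (per RFC 2849); the DN, attribute, and values here are made up for illustration:

```ldif
dn: cn=jdoe,ou=people,dc=example,dc=com
changetype: modify
delete: mail
mail: old@example.com
-
add: mail
mail: new@example.com
```

The `-` line separates the two change operations, but the whole record is applied as a single modify request, which is what makes it behave like an upsert.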