Smack user validation and Nodeprep profile of stringprep - java

I'm working with the Smack library and, as I understand it, it has no function for validating the user JID that is used when creating a new Connection. (Please correct me if I'm wrong.)
So I decided to write one myself, and for this purpose I started to investigate RFC 6122, which contains an ABNF block with the validation rules.
Unfortunately I'm not very familiar with Unicode-specific and BNF-related details, so I couldn't work out how to build a correct regular expression from that ABNF block. In particular, I'm confused by the "Nodeprep profile of stringprep" mentioned in it.
Could you please clarify this, or give me some advice?

It's defined in RFC 6122, Appendix A, but that's unlikely to help you without also reading RFC 3454, and a bunch of other source material. It's quite an undertaking to implement, so I strongly suggest you use an existing Stringprep library, such as libidn.
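To illustrate why a regex alone won't get you there: with only the JDK you can do the *structural* checks from RFC 6122 (split on @, enforce length limits, reject the characters Appendix A explicitly forbids), but the Nodeprep step itself requires the RFC 3454 mapping/normalization/prohibition tables, which is exactly what a stringprep library provides. A rough sketch of the structural part only (class and method names are my own invention):

```java
public class JidCheck {
    // Characters RFC 6122 Appendix A forbids in the localpart,
    // on top of what Nodeprep itself prohibits.
    private static final String FORBIDDEN = "\"&'/:<>@";

    /**
     * Rough structural check of a bare JID (localpart@domain).
     * This is NOT a Nodeprep implementation: real validation must also
     * apply the RFC 3454 mapping, normalization, and prohibition tables,
     * which is what a stringprep library such as libidn gives you.
     */
    public static boolean isPlausibleBareJid(String jid) {
        int at = jid.indexOf('@');
        if (at <= 0 || at == jid.length() - 1) return false;
        String local = jid.substring(0, at);
        String domain = jid.substring(at + 1);
        // RFC 6122 caps each part at 1023 bytes; char count is a rough stand-in
        if (local.length() > 1023 || domain.length() > 1023) return false;
        for (int i = 0; i < local.length(); i++) {
            char c = local.charAt(i);
            if (FORBIDDEN.indexOf(c) >= 0 || Character.isISOControl(c)) return false;
        }
        return true;
    }
}
```

Treat this as a pre-filter at best; anything it accepts still needs to go through a real stringprep implementation before you trust it.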


Minimum required properties in ESAPI.properties

My web application uses only the following ESAPI encode methods:
ESAPI.encoder().encodeForLDAP()
ESAPI.encoder().encodeForHTML()
In this case, what are the minimum required properties in ESAPI.properties?
I'm currently using ESAPI 2.1.0.1 with these properties.
If you are just using the encoder() function, the 3 lines in the encoder section are all you need: lines 99-119 (between all the comments).
Edit
Plus you must specify a default encoder. Example:
ESAPI.Encoder=org.owasp.esapi.reference.DefaultEncoder
Encoder.AllowMultipleEncoding=false
Encoder.AllowMixedEncoding=false
Encoder.DefaultCodecList=HTMLEntityCodec,PercentCodec,JavaScriptCodec
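For a sense of what the HTMLEntityCodec in that codec list is doing for you, here is a deliberately simplified stand-in (my own sketch, not ESAPI code) that entity-encodes the five HTML-significant characters. The real codec handles far more, including numeric entities and multiple-encoding detection:

```java
public class HtmlEncodeSketch {
    // Minimal sketch of what encodeForHTML() guards against: the five
    // characters that can change HTML parsing context are replaced with
    // entities so attacker-supplied text cannot break out of a text node.
    public static String encodeForHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

This is only for understanding; in production keep using ESAPI.encoder().encodeForHTML(), which covers the cases this sketch ignores.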
I think I answered this in a previous question.
Again you're the victim of some bad design choices back at the beginning of the ESAPI project between 2009-2011. Namely, the Singleton-based monolith.
ESAPI's current design is monolithic, meaning it tries to be everything to everyone. As you're well aware, this isn't the best design strategy, but that's where we're at.
We have several proposals for breaking various functions out into separate libraries, but that's future work towards building ESAPI 3.0.
For your current dilemma, too much of the library depends on functionality that it sounds like you don't need and don't intend to use. Unfortunately, that is simply the current fact of life. No one ever seems to use our authentication interface--but it's there for everybody, even if they don't need it. Most users use our encoding/decoding capability first, followed by the validation API, and then crypto. The last couple are log injection and the WAF.
Most users of ESAPI take the non-prod test file, and leave it at that. (This is a really bad idea.)
The others take the one you reference and work through the exceptions, asking us questions on the mailing list.
This is not an ideal path to walk either, but it's the path we're on right now.
The danger, from my perspective, is if you implement happy-path configurations for the parts ESAPI is throwing exceptions about, with the goal of JUST making it happy so you can get to your two narrow use cases.
Then you get promoted, and another developer on your app is faced with a problem that she thinks is solved because you handled all the integration with ESAPI.
PAY ATTENTION TO THE PARTS OF ESAPI THAT DON'T PERTAIN TO YOUR USE CASE. This isn't ideal, but it's where we're at in 2017. Ask us questions on the user list.
Failure to do so--especially in the crypto portion--will leave your application vulnerable in the future.
RegEx used in ESAPI.validator().getValidInput(..) calls
Validator.COMPANY_ID_PTRN=[a-zA-Z0-9]+
Validator.USER_DN_PTRN=[a-zA-Z0-9=,]+
Validator.ROLE_DN_PTRN=[a-zA-Z0-9=,^\- ]+
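Those Validator.* entries are ordinary java.util.regex patterns, and (as I understand the validator) they are treated as whole-string matches. A quick sketch of how they behave, with the property-file patterns transcribed into Java string escaping (class and method names are mine):

```java
import java.util.regex.Pattern;

public class DnPatterns {
    // Same character classes as the ESAPI.properties entries above.
    // Note the property-file "\-" becomes "\\-" in a Java string literal.
    static final Pattern USER_DN = Pattern.compile("[a-zA-Z0-9=,]+");
    static final Pattern ROLE_DN = Pattern.compile("[a-zA-Z0-9=,^\\- ]+");

    // matches() requires the ENTIRE input to consist of allowed characters
    public static boolean isValidUserDn(String s) { return USER_DN.matcher(s).matches(); }
    public static boolean isValidRoleDn(String s) { return ROLE_DN.matcher(s).matches(); }
}
```

Notice that USER_DN_PTRN rejects any DN containing a space or hyphen, while ROLE_DN_PTRN permits both; that difference is worth checking against your real directory data.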
Minimum default settings
ESAPI.Encoder=org.owasp.encoder.esapi.ESAPIEncoder
ESAPI.Logger=org.owasp.esapi.logging.slf4j.Slf4JLogFactory
Logger.ApplicationName=TrianzApp
Logger.LogEncodingRequired=false
Logger.LogApplicationName=false
Logger.LogServerIP=false
Logger.UserInfo=false
Logger.ClientInfo=false
IntrusionDetector.Disable=true
ESAPI.Validator=org.owasp.esapi.reference.DefaultValidator
Encoder.AllowMixedEncoding=false
Encoder.AllowMultipleEncoding=false
ESAPI.printProperties=false

Java code change analysis tool - e.g. tell me if a method signature or a method implementation has changed

Is there any diff tool specifically for Java that doesn't just highlight differences in a file, but is more complex?
By more complex I mean it'd take 2 input files, the same class file of different versions, and tell me things like:
Field names changed
New methods added
Deleted methods
Methods whose signatures have changed
Methods whose implementations have changed (not interested in any more detail than that)
I've done some Googling and can't find anything like this... I figure it could be useful in determining whether or not changes to dependencies require a rebuild of a particular module.
Thanks in advance
Edit:
I suppose I should clarify:
I'm not bothered about a GUI for the tool, it'd be something I'm interested in calling programmatically.
And as for my reasoning:
To work out whether I need to rebuild certain modules/components when their dependencies have changed (which could save us around 1 hour per component)... There's a more detailed explanation, but I don't really see it as important.
To analyse changes made to certain components that we are trying to lock down and rely on as being more stable; we are attempting to ensure that method signatures in a particular component change only very rarely.
You said above that Clirr is what you're looking for.
But for others with slightly different needs, I'd like to recommend JDiff. Both have pros and cons, but for my needs I ended up using JDiff. I don't think it will satisfy your last bullet point, and it's difficult to call programmatically, but it does generate a useful report of API differences.
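If you did end up rolling something programmatic yourself, the signature-level parts of your list (fields/methods added, removed, or changed) fall out of plain reflection: load the old and new versions of a class in separate class loaders, collect each version's signature strings, and diff the two sets. A sketch of the collection step (class name is my own; detecting *implementation* changes would instead need bytecode comparison, e.g. via a bytecode library):

```java
import java.lang.reflect.Method;
import java.util.Set;
import java.util.TreeSet;

public class SignatureLister {
    /**
     * Sorted set of method signatures for a class, e.g. "int length()".
     * Diffing the sets from two versions of the same class reveals
     * added, removed, and signature-changed methods.
     */
    public static Set<String> signatures(Class<?> cls) {
        Set<String> sigs = new TreeSet<>();
        for (Method m : cls.getDeclaredMethods()) {
            StringBuilder sb = new StringBuilder(m.getReturnType().getName())
                    .append(' ').append(m.getName()).append('(');
            Class<?>[] params = m.getParameterTypes();
            for (int i = 0; i < params.length; i++) {
                if (i > 0) sb.append(',');
                sb.append(params[i].getName());
            }
            sigs.add(sb.append(')').toString());
        }
        return sigs;
    }
}
```

Set difference in either direction then gives you "deleted" and "added"; entries differing only in parameter or return types show up as one removal plus one addition.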

Automated method of verifying backlinks / URLs?

Does anyone have suggestions on automated ways to verify that backlinks are valid? I realize there are many criteria for determining this so I am open to all sorts of suggestions.
A few example criteria would be backlinks coming from specific domains, hosts, etc., but what about criteria that are not so easy to determine, such as the age of the link or the subject matter of the site where the link originates?
P.S., Although the above is a general question, I'm specifically looking for how to do this in Java.
I would suggest you ask the generic question on Programmers because determining what is a "good" backlink is not specific to any language.
If you have the criteria, try implementing them yourself first, and then ask a specific question if you get stuck.
The java.net.URL class handles addresses well; you can get the host and other components nicely from there.
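For instance, the domain/host criteria you mention reduce to parsing the backlink with java.net.URL. A small sketch (class and method names are my own; note the naive "last two labels" domain guess is wrong for suffixes like co.uk, and a liveness check would be a separate HTTP HEAD request via HttpURLConnection):

```java
import java.net.URL;

public class BacklinkInfo {
    /** Host of a backlink URL, e.g. "blog.example.com". */
    public static String hostOf(String address) throws Exception {
        return new URL(address).getHost();
    }

    /**
     * Naive registrable-domain guess: the last two host labels.
     * Good enough for filtering, but wrong for multi-part public
     * suffixes such as "co.uk".
     */
    public static String domainOf(String address) throws Exception {
        String host = hostOf(address);
        String[] labels = host.split("\\.");
        int n = labels.length;
        return n < 2 ? host : labels[n - 2] + "." + labels[n - 1];
    }
}
```

The harder criteria (link age, subject matter) have no equivalent one-liner; those need external data sources or content analysis.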

Understanding this architecture

I have inherited a massive system from my predecessor, and while I am beginning to understand how it works, I can't fathom why.
It's in Java and uses interfaces, which should add one extra layer, but here they add 5 or 6.
Here's how it goes: when the user interface button is pressed, it calls a function which looks like this:
foo.create(stuff...)
{
bar.create(stuff...);
}
bar.create is exactly the same, except it calls foobar.create, and that in turn calls barfoo.create. This goes on through 9 classes before it reaches a function that actually accesses the database.
As far as I know, each extra function call incurs some performance cost, so this seems wasteful to me.
Also, in foo.create all the variables are error-checked. This makes sense, but in every other call the error checks happen again; it looks like cut-and-paste code.
This seems like madness: once the variables have been checked, they should not need to be re-checked, as that just wastes processor cycles in my opinion.
This is my first project using Java and interfaces, so I'm confused as to what's going on.
Can anyone explain why the system was designed like this, what benefits/drawbacks it has, and what I can do to improve it if it is bad?
Thank you.
I suggest you look at design patterns, and see if they are being used in the project. Search for words like factory and abstract factory initially. Only then will the intentions of the previous developer be understood correctly.
Also, in general, unless you are running on a resource constrained device, don't worry about the cost of an extra call or level of indirection. If it helps your design, makes it easier to understand or open to extension, then the extra calls are worth making.
However, if there is copy-paste in the code, then that is not a good sign, and the developer probably did not know what he was doing.
It is very hard to know what exactly is done in your software; maybe it even makes sense. But I've seen a couple of projects done by some "design pattern maniacs". It looked like they wanted to demonstrate their knowledge of all sorts of delegates, indirections, etc. Maybe that is your case.
I cannot comment on the architecture without examining it carefully, but generally speaking, separating services across different layers is a good idea. That way, if you change the implementation of one service, the other services remain unchanged. However, this holds only if there is loose coupling between the layers.
In addition, it is generally the norm that each service handles the exceptions that specifically pertain to the kind of service it provides, leaving the rest to others. This also reduces the coupling between service layers.
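To make the layering-via-interfaces idea concrete, and to show that it does not require duplicating the error checks at every level, here is a hypothetical sketch (all names invented) where validation lives in exactly one decorator at the boundary and the inner layer only does its own job:

```java
// The interface both layers share: callers depend on this, not on a class.
interface UserService {
    String create(String name);
}

// Innermost layer: pretend database access, no re-validation.
class UserDao implements UserService {
    public String create(String name) {
        return "created:" + name; // stand-in for the real DB write
    }
}

// Boundary layer: validates ONCE, then delegates. Swapping UserDao for
// another implementation changes nothing for callers of UserService.
class ValidatingUserService implements UserService {
    private final UserService next;

    ValidatingUserService(UserService next) { this.next = next; }

    public String create(String name) {
        if (name == null || name.isEmpty()) {
            throw new IllegalArgumentException("name must be non-empty");
        }
        return next.create(name);
    }
}
```

If the system you inherited repeats the checks at every one of its 9 layers, that is the cut-and-paste smell, not a consequence of the layering itself.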

What else do I need to know about implementing a one-time-password system?

I've been tasked with creating a One Time Password (OTP) system which will eventually be used to create OTP generators on mobile devices.
We're looking at using HOTP (RFC 4226) with a counter, but possibly with some variations. We are not required to be OATH-compliant.
This is my first experience in the security/cryptographic realm, so I'm trying to avoid (and learn about) security pitfalls that ensnare security rookies, as well as gain a better understanding of what I'll need to do and know to complete this task.
In addition to this general advice, I've got a few specific questions about implementing this project:
Is HOTP still considered secure, even if it's just using SHA-1? One of my coworkers suggested we should be using HMAC-SHA-512. It looks easy enough to switch which underlying algorithm we're using. Are there any side effects here I should know? Such as increased processing time?
I've got concerns about the counter synchronization. What should I be using as a sane look-ahead for possible counter values? What are the best ways to get back in sync if the user has clicked ahead past our look-ahead limit? Would it be easier to display and send the counter along with the corresponding OTP, or does that significantly weaken security?
I also don't have a good understanding of best practices for securely storing related information, such as the shared secret and counter values.
When you answer, please keep in mind I'm new to this domain, and am still trying to catch up on the jargon and acronyms. Thanks in advance.
HMAC-SHA-1 is quite a bit stronger than plain SHA-1, which I believe has been broken. My memory is somewhat foggy, but Googling around found some more info on HMAC-SHA-1 vs SHA-1 here.
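For concreteness, the HOTP computation itself is small. Here is a sketch of the RFC 4226 algorithm (HMAC over the big-endian 8-byte counter, dynamic truncation, 6-digit code) using only the JDK's javax.crypto; the class name is mine. Swapping "HmacSHA1" for "HmacSHA512", as your coworker suggests, is a one-line change and the truncation step still works, since it only reads 4 bytes at an offset taken from the last byte:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Hotp {
    /** HOTP per RFC 4226: 6-digit code from a shared secret and counter. */
    public static int generate(byte[] secret, long counter) throws Exception {
        // RFC 4226 step 1: counter as an 8-byte big-endian message
        byte[] msg = new byte[8];
        for (int i = 7; i >= 0; i--) {
            msg[i] = (byte) (counter & 0xff);
            counter >>>= 8;
        }
        // Step 2: HMAC-SHA-1 of the counter under the shared secret
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] h = mac.doFinal(msg);
        // Step 3: dynamic truncation -- low nibble of the last byte picks
        // an offset; take 31 bits starting there
        int offset = h[h.length - 1] & 0x0f;
        int bin = ((h[offset] & 0x7f) << 24)
                | ((h[offset + 1] & 0xff) << 16)
                | ((h[offset + 2] & 0xff) << 8)
                |  (h[offset + 3] & 0xff);
        return bin % 1_000_000; // 6 decimal digits
    }
}
```

You can check an implementation like this against the test vectors in RFC 4226 Appendix D (secret "12345678901234567890", counter 0 gives 755224). None of this addresses your harder questions: secure storage of the secret and counter, and the resynchronization window, still need careful design on the server side.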
