Regex for lines from /etc/passwd and /etc/group - java

I've been working on a small Java problem set and have run into some trouble. I'm not very experienced at writing regular expressions and could really use two of them, one for verifying line entries in /etc/group and one for /etc/passwd, in Java.
I found Regex Verification of Line in /etc/passwd earlier and have yet to test it, but it looks adaptable for what I need. Could anyone else help by providing a regex string for either file?
I'm looking to verify user-entered passwd and group lines, in Java, before writing them out to disk. If a regex isn't practical, I'll likely end up tokenizing each piece and running various expensive checks.

Rather than writing a regex, you should probably just read the file with a Scanner and split each line with String.split(":"). Then you can check that each part is valid without dealing with a complex expression that handles all the cases. It'll probably be easier to write the code and easier to read it later.
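For illustration, here is a minimal sketch of that split-and-check approach for a passwd line (username:password:UID:GID:GECOS:home:shell). The class name and the per-field rules below are placeholders of my own, not the full POSIX rules, so adjust them to your own policy:

import java.util.regex.Pattern;

public class PasswdLineCheck {
    // illustrative field rules only -- tighten or relax to match your policy
    private static final Pattern NAME = Pattern.compile("[a-z_][a-z0-9_-]*\\$?");
    private static final Pattern NUMBER = Pattern.compile("\\d+");

    static boolean isValidPasswdLine(String line) {
        // username:password:UID:GID:GECOS:home:shell -- exactly 7 fields
        String[] f = line.split(":", -1); // -1 keeps trailing empty fields
        if (f.length != 7) {
            return false;
        }
        return NAME.matcher(f[0]).matches()               // username
                && NUMBER.matcher(f[2]).matches()          // UID
                && NUMBER.matcher(f[3]).matches()          // GID
                && f[5].startsWith("/")                    // home directory
                && (f[6].isEmpty() || f[6].startsWith("/")); // shell
    }
}

A group line (name:password:GID:members) can be checked the same way with four fields instead of seven.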

Why do you want to use regular expressions? Just split the line on the colons and inspect the pieces.

Related

Java, how to recognize a part of a token as a separate token?

Hopefully my title isn't completely terrible. I don't really know what this should be called. I'm trying to write a very basic scheme parser in Java. The issue I'm having is with implementation.
I open a file, and I want to parse individual tokens:
while (sc.hasNext()) {
    System.out.println(sc.next());
}
Generally, to get tokens, this is fine. But in Scheme, recognizing the beginning and end of a list is crucial; my program's functionality depends on this, so I need a way to treat a token such as:
(define
or
poly))
As multiple tokens, where each parenthesis is its own token:
(
define
poly
)
)
If I can do that, I can properly recognize different symbols to add to my symtab, and know when/how to add nodes to my parse tree.
The Java API shows that the Scanner class doesn't have any methods for doing exactly what I want. The closest thing I could think of is using the parentheses as custom delimiters, which would make each token clean enough to be recognized more easily by my logic, but then what happens to my parentheses?
Another method I'm thinking about is forgoing the Java tokenizer, and just scanning char by char until I find a complete symbol.
What should I do? Try to work around the Java scanner methods, or just do a character by character approach?
First, you need to get your terminology straight. (define is not a single token; it's a ( token followed by a define one. Similarly, poly)) is not a single token, it's three.
Don't let java.util.Scanner (that's what you're using, right?) throw you for a loop -- when you say "Generally, to get tokens, this is fine", I say no, it's not. Don't settle for what it provides if it's not enough.
To correctly tokenize Scheme code, I'd expect you need at least to be able to deal with regular languages. That would probably be very tough to do using Scanner, so here are a couple of alternatives:
learn and apply a tried-and-true parsing tool like Antlr or Lex. Will be beneficial for any of your future parsing projects
roll your own regular-expression approach to tokenizing (I don't know Scheme well enough to be sure this will cover everything; see the sketch after this list), but don't forget that you need at least context-free power for full parsing
learn about parser combinators and recursive descent parsing, which are relatively easy to implement by hand -- and you'll end up learning a ton about Java's type system
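As a rough illustration of the regex route, here is a minimal sketch (the class name and pattern are my own, and it ignores strings, comments and quoting) that treats each parenthesis as its own token and anything else between whitespace as an atom:

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SchemeTokens {
    // a paren is its own token; anything else that isn't whitespace or a paren is an atom
    private static final Pattern TOKEN = Pattern.compile("[()]|[^()\\s]+");

    static List<String> tokenize(String source) {
        List<String> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(source);
        while (m.find()) {
            tokens.add(m.group());
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("(define (square x) (* x x))"));
        // [(, define, (, square, x, ), (, *, x, x, ), )]
    }
}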

How does java.util.Scanner work?

I have a simple language which consists of patterns like
size(50*50)
start(10, 20, -x)
forward(15)
stop
It's an example of a turtle-drawing language, and the above is a source code instance; statements and expressions are separated by newlines. I need to tokenize it properly. I set up my Scanner to use newlines as delimiters. I expect next("start") to eat the string "start", and then I issue next("(") to eat the first parenthesis. It appears, however, that it does something other than what I expect.
Has the scanner already broken the above into tokens based on the delimiter, and/or do I need to approach this differently? To me, "size", "(", "50", "*", "50" and ")" on the first line would constitute separate tokens, which appears not to be the case here. How can I tokenize the above with as little code as possible? I don't really want to write a tokenizer; I am writing an interpreter, so tokenizing is something I don't want to spend my time on right now. I'd just like Scanner to work with me here.
My useDelimiter call is as follows:
Scanner s ///...
s.useDelimiter(Pattern.compile("[\\s]&&[^\\r\\n]"));
Issuing the first next call gives me the entire file contents. Without the above call, it gives me the entire first line.
To write a proper parser, you need to define your language in a formal grammar. Trust me, you want to do it properly or you will have problems downstream.
You can probably represent your tokens as regular expressions at the lowest level, but first you need to be clear about your grammar, which describes how tokens combine into lexical structures. You can represent the grammar as recursive functions (methods), known as productions. Each production function can use the scanner to test whether or not it is looking at a token it wants, but the scanner will consume the input and you can't rewind.
If you use Scanner, you will find the following things unsuitable:
1. It will always parse a token according to the regular expression,
1.1 so even if you do get a token you can use, you will have to write more code to decide exactly which token it was,
1.2 and you may not be able to represent your language grammar as one big expression.
2. You can't rewind. A look-ahead parser (required for lots of grammars like yours) needs to be able to look ahead at the input stream and then decide, if it wants, not to consume the input and to let another token-parsing function use it.
I suggest you write the character lexer yourself and iterate over a string / array of chars rather than a stream. Then you can rewind.
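To illustrate the hand-rolled idea, here is a minimal sketch (the class and method names are invented for the example, and it only covers the keywords, numbers and punctuation shown above); keeping an index into the source string gives you mark/reset, i.e. rewind, for free:

public class TurtleLexer {
    private final String src;
    private int pos;

    TurtleLexer(String src) { this.src = src; }

    int mark() { return pos; }            // remember a position...
    void reset(int mark) { pos = mark; }  // ...and rewind to it if a production gives up

    String next() {
        while (pos < src.length() && Character.isWhitespace(src.charAt(pos))) pos++;
        if (pos >= src.length()) return null;
        char c = src.charAt(pos);
        if (Character.isLetter(c)) {      // keywords and names: size, start, forward, stop, x
            int start = pos;
            while (pos < src.length() && Character.isLetter(src.charAt(pos))) pos++;
            return src.substring(start, pos);
        }
        if (Character.isDigit(c)) {       // unsigned number; '-' comes out as its own token
            int start = pos;
            while (pos < src.length() && Character.isDigit(src.charAt(pos))) pos++;
            return src.substring(start, pos);
        }
        pos++;                            // single-character tokens: ( ) , * -
        return String.valueOf(c);
    }

    public static void main(String[] args) {
        TurtleLexer lx = new TurtleLexer("start(10, 20, -x)");
        for (String t = lx.next(); t != null; t = lx.next()) {
            System.out.println(t);        // start ( 10 , 20 , - x )
        }
    }
}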
Otherwise, use a ready-built lexer/parser framework like yacc or Coco/R.
The class java.io.StreamTokenizer may be a better fit. It is used in this example of a recursive descent parser.
Addendum: What is the principal difference between the StreamTokenizer and Scanner classes?
Either can do the lexical analysis required by a parser. StreamTokenizer is lighter weight but limited to four pre-defined meta-token types. Scanner is considerably more flexible, but somewhat more cumbersome to use. Here's a comparison of the two and a variation on the latter.
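By way of illustration, a minimal StreamTokenizer sketch over the turtle-language sample above might look like this (a sketch, not a full lexer; ordinaryChar('-') is used here so '-' is reported on its own rather than folded into a number):

import java.io.IOException;
import java.io.StreamTokenizer;
import java.io.StringReader;

public class StreamTokenizerDemo {
    public static void main(String[] args) throws IOException {
        StreamTokenizer st = new StreamTokenizer(
                new StringReader("start(10, 20, -x)\nforward(15)\nstop"));
        st.ordinaryChar('-');      // keep '-' as its own token instead of part of a number
        st.eolIsSignificant(true); // report end-of-line, since statements are newline-separated
        while (st.nextToken() != StreamTokenizer.TT_EOF) {
            switch (st.ttype) {
                case StreamTokenizer.TT_WORD:   System.out.println("word:   " + st.sval); break;
                case StreamTokenizer.TT_NUMBER: System.out.println("number: " + st.nval); break;
                case StreamTokenizer.TT_EOL:    System.out.println("<newline>"); break;
                default:                        System.out.println("char:   " + (char) st.ttype);
            }
        }
    }
}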

understanding regex if then statements

So I'm not sure I understand how these work and would just like a simple explanation of how they work. I probably have it way off. A pure regex solution is required, and I don't know if this is even possible. If it is, a solution would be awesome too, but a shove in the right direction would be good for my learning process ^_^
This is how I thought the if/then/else option built into my regex engines was formatted:
?(condition)if regex|else regex
I want it to capture a string from a very specific location, only when this string exists within a certain section of JavaScript. Because this is how I thought it worked after a decent amount of research, I tried out a few variations of this code, but they all ended up something like this:
((?^view_large$)Tables-137(.*?)search.htm)
Also of relevance: I'm using a Java-based app that has regex searches which pull the data I need, so I cannot write an if statement in Java, which would be my preferred method. It's a pain to have to do it this way, but at the moment I have no other choice. I'm trying really hard to get them to allow Java code instead of pure regex, for more versatile options.
So to summarize: is there even an if/then option in regex, and if so, how is it formatted for what I'm trying to accomplish?
EDIT: The condition I want for the "if" is this: if the view_large string exists and is not null, then capture the exact string 500/, which is captured within the catch-all group I used: (.*?)
There are no conditionals in Java regex, but you can simulate them by writing two expressions that include mutually exclusive look-behind constructs, like this:
((?<=if )then)|((?<!if )end)
This expression will match "then" when it is preceded by an "if "; it will match "end" when it is not preceded by an "if "
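For example, a quick (illustrative) test of that idea in Java:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LookbehindDemo {
    public static void main(String[] args) {
        // "then" only counts when preceded by "if "; "end" only counts when it is not
        Matcher m = Pattern.compile("((?<=if )then)|((?<!if )end)")
                           .matcher("if then something end");
        while (m.find()) {
            System.out.println(m.group()); // prints "then", then "end"
        }
    }
}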
The Javadoc for java.util.regex.Pattern mentions, in its list of "Perl constructs not supported by this class":
The conditional constructs (?(condition)X) and (?(condition)X|Y).
So, no dice. But you should look through the Javadoc to see if you can achieve what you need by using regex features that it does support. (Or, if you post some more detailed examples, we can try to help.)
Try lookaround assertions.
For example, say you want to capture FOOBAR only if there is a 4+ digit number somewhere:
(?=.*\d{4}).*(FOOBAR)
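In Java that might look like the following (the sample strings are made up for the example):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LookaheadDemo {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("(?=.*\\d{4}).*(FOOBAR)");

        Matcher hit = p.matcher("order 2024 item FOOBAR");
        if (hit.find()) {
            System.out.println(hit.group(1)); // FOOBAR -- a 4-digit number is present
        }

        Matcher miss = p.matcher("order item FOOBAR");
        System.out.println(miss.find());      // false -- no 4-digit number anywhere
    }
}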

When would it be worth using RegEx in Java?

I'm writing a small app that reads some input and does something based on that input.
Currently, to look for a line that ends with, say, "magic", I would use String's endsWith method. It's pretty clear to whoever reads my code what's going on.
Another way to do it is to create a Pattern and try to match a line that ends with "magic". This is also clear, but I personally think this is overkill because the pattern I'm looking for is not complex at all.
When do you think it's worth using regex in Java? If it's a matter of complexity, how would you personally define what's complex enough?
Also, are there times when using Patterns is actually faster than string manipulation?
EDIT: I'm using Java 6.
Basically: if there is a non-regex operation that does what you want in one step, always go for that.
This is not so much about performance, but about a) readability and b) compile-time safety. Specialized non-regex versions are usually a lot easier to read than regex versions. And a typo in one of these specialized methods will not compile, while a typo in a regex will fail miserably at runtime.
Comparing regex-based solutions to non-regex-based solutions:
String s = "Magic_Carpet_Ride";
s.startsWith("Magic"); // non-regex
s.matches("Magic.*"); // regex
s.contains("Carpet"); // non-regex
s.matches(".*Carpet.*"); // regex
s.endsWith("Ride"); // non-regex
s.matches(".*Ride"); // regex
In all these cases it's a no-brainer: use the non-regex version.
But when things get a bit more complicated, it depends. I guess I'd still stick with non-regex in the following case, but many wouldn't:
// Test whether a string ends with "magic" in any case,
// followed by optional white space
s.toLowerCase().trim().endsWith("magic"); // non-regex, 3 calls
s.matches(".*(?i:magic)\\s*"); // regex, 1 call, but ugly
And in response to RegexesCanCertainlyBeEasierToReadThanMultipleFunctionCallsToDoTheSameThing:
I still think the non-regex version is more readable, but I would write it like this:
s.toLowerCase()
.trim()
.endsWith("magic");
Makes the whole difference, doesn't it?
You would use Regex when the normal manipulations on the String class are not enough to elegantly get what you need from the String.
A good indicator that this is the case is when you start splitting, then splitting those results, then splitting those results again. The code gets unwieldy. Two lines of Pattern/regex code can clean this up, neatly wrapped in a method that is unit tested.
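A tiny made-up example of that trade-off, using a hypothetical key=value line (the input format and names are invented for illustration):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NestedSplitVsPattern {
    public static void main(String[] args) {
        String line = "user=alice; role=admin; quota=20";

        // split-of-splits version: split on ";", then each piece on "="
        for (String pair : line.split(";")) {
            String[] kv = pair.split("=");
            System.out.println(kv[0].trim() + " -> " + kv[1].trim());
        }

        // single-pattern version: one expression, one loop
        Matcher m = Pattern.compile("(\\w+)\\s*=\\s*(\\w+)").matcher(line);
        while (m.find()) {
            System.out.println(m.group(1) + " -> " + m.group(2));
        }
    }
}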
Anything that can be done with regex can also be hand-coded.
Use regex if:
Doing it manually is going to take more effort without much benefit.
You can easily come up with a regex for your task.
Don't use regex if:
It's very easy to do it otherwise, as in your example.
The string you're parsing does not lend itself to regex. (it is customary to link to this question)
I think you are best with using endsWith. Unless your requirements change, it's simpler and easier to understand. Might perform faster too.
If there was a bit more complexity, such as wanting to match "magic" and "majik" but not "Magic" or "Majik"; or wanting to match "magic" followed by a space and then exactly one word, such as "... magic spoon" but not "... magic soup spoon", then I think regex would be the better way to go.
Any complex parsing where you are generating a lot of objects is better done with regex when you factor in both the computing power and the brainpower it takes to write the code for that purpose. If you have a regex guru handy, it's almost always worthwhile, as the patterns can easily be tweaked to accommodate business-rule changes without the major refactoring that would likely be needed if you used pure Java to do some of the complex things regex does.
If your basic line ending is the same every time, as with "magic", then you are better off using endsWith.
However, if you have a line that has the same base, but can have multiple values, such as:
<string> <number> <string> <string> <number>
where the strings and numbers can be anything, you're better off using regex.
Your lines are always ending with a string, but you don't know what that string is.
If it's as simple as endsWith, startsWith or contains, then you should use these functions. If you are processing more "complex" strings and you want to extract information from these strings, then regexp/matchers can be used.
If you have something like "commandToRetrieve someNumericArgs someStringArgs someOptionalArgs" then regexp will ease your task a lot :)
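For instance, a sketch for a made-up command of that shape (the format, names and groups are assumptions for the example):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CommandParse {
    public static void main(String[] args) {
        // hypothetical command format: name, a number, a word, then anything optional
        Pattern cmd = Pattern.compile("(\\w+)\\s+(\\d+)\\s+(\\w+)(?:\\s+(.*))?");
        Matcher m = cmd.matcher("fetchRecord 42 users verbose");
        if (m.matches()) {
            System.out.println("command = " + m.group(1)); // fetchRecord
            System.out.println("number  = " + m.group(2)); // 42
            System.out.println("string  = " + m.group(3)); // users
            System.out.println("options = " + m.group(4)); // verbose (or null if absent)
        }
    }
}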
I'd never use regexes in Java if I have an easier way to do it, like in this case the endsWith method. Regexes in Java are as ugly as they get, probably with the only exception of the matches method on String.
Usually avoiding regexes makes your code more readable and easier for other programmers. The opposite is also true: complex regexes can confuse even the most experienced hackers out there.
As for performance concerns: just profile. Specially in java.
If you are familiar with how regexp works you will soon find that a lot of problems are easily solved by using regexp.
Personally I reach for plain Java String operations if that is easy, but if you start splitting strings and then taking substrings of those again, I'd start thinking in regular expressions.
And if you use regular expressions, why stop at lines? By configuring your regex you can easily read entire files with one regular expression (pass Pattern.DOTALL to Pattern.compile so your expression doesn't stop at newlines). I'd combine this with Apache Commons IOUtils.toString() and you have something very powerful for quick jobs.
I would even bring out a regular expression to parse some xml if needed. (For instance in a unit test, where I want to check that some elements are present in the xml).
For instance, from some unit test of mine:
Pattern pattern = Pattern.compile(
"<Monitor caption=\"(.+?)\".*?category=\"(.+?)\".*?>"
+ ".*?<Summary.*?>.+?</Summary>"
+ ".*?<Configuration.*?>(.+?)</Configuration>"
+ ".*?<CfgData.*?>(.+?)</CfgData>", Pattern.DOTALL);
which will match all segments in this xml and pick out some segments that I want to do some sub matching on.
I would suggest using a regular expression when you know the format of an input but you are not necessarily sure on the value (or possible value(s)) of the formatted input.
What I'm saying is: if your input always ends with, in your case, "magic", then String.endsWith() works fine (since you know your possible input values will end with "magic").
If you have a format, e.g. the RFC 5322 message format, you cannot simply say that every email address ends with .com, so you can create a regular expression that conforms to the RFC 5322 standard for verification.
In a nutshell, if you know a format structure of your input data but don't know exactly what values (or possible values) you can receive, use regular expressions for validation.
There's a saying that goes:
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. (link).
For a simple test, I'd proceed exactly like you've done. If you find that it's getting more complicated, then I'd consider Regular Expressions only if there isn't another way.

distinguishing a string with flex

I need to tokenize some strings which are split according to operators like = and !=. I was successful using regex until a string contained the != operator. In that case the string was separated into two parts, as expected, but the ! ended up on the left-hand side even though it is part of the operator. Therefore I suspect that my regex is not suitable and I want to try lex. Since I do not have enough knowledge and experience with lex, I am not sure whether it fits my task or not. Basically, I am trying to replace the right-hand side of the operators with actual values from other data. Do you think lex can be helpful in my case?
Thanks.
Should you use lex? It depends how complex your language is. It's a very powerful tool, worth understanding (especially with yacc, or in Java you could use antlr or javacc).
public String[] split(String regex) does take a regex, not just a string. You could use the regex "!?=", which means zero or one ! followed by =. But the problem with using split is that it won't tell you what the actual delimiter was.
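To illustrate the point about losing the delimiter, here is a small sketch (the input is made up); capturing the operator in a group keeps the information that split() throws away:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OperatorSplit {
    public static void main(String[] args) {
        String input = "name != value";

        // split() drops the delimiter, so you can't tell = from != afterwards
        String[] parts = input.split("\\s*!?=\\s*"); // ["name", "value"]
        System.out.println(parts[0] + " | " + parts[1]);

        // capturing the operator keeps that information
        Matcher m = Pattern.compile("(.+?)\\s*(!?=)\\s*(.+)").matcher(input);
        if (m.matches()) {
            System.out.println("lhs=" + m.group(1) + " op=" + m.group(2) + " rhs=" + m.group(3));
        }
    }
}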
With what little info we have about your application, I'd be tempted to use regular expressions. There are lots of experts here on stackoverflow to help. A great place to start is the Java regex tutorial.
(Thanks to Falle1234 for picking up my mistake - now corrected.)
