Regular expression with xml tag [duplicate] - java

There is no day on SO that passes without a question about parsing (X)HTML or XML with regular expressions being asked.
While it's relatively easy to come up with examples demonstrating the non-viability of regexes for this task, or with a collection of expressions to represent the concept, I could still not find on SO a formal explanation, done in layman's terms, of why this is not possible.
The only formal explanations I could find so far on this site are probably extremely accurate, but also quite cryptic to the self-taught programmer:
the flaw here is that HTML is a Chomsky Type 2 grammar (context free
grammar) and RegEx is a Chomsky Type 3 grammar (regular expression)
or:
Regular expressions can only match regular languages but HTML is a
context-free language.
or:
A finite automaton (which is the data structure underlying a regular
expression) does not have memory apart from the state it's in, and if
you have arbitrarily deep nesting, you need an arbitrarily large
automaton, which collides with the notion of a finite automaton.
or:
The Pumping lemma for regular languages is the reason why you can't do
that.
[To be fair: the majority of the above explanations link to Wikipedia pages, but these are not much easier to understand than the answers themselves.]
So my question is: could somebody please provide a translation in layman's terms of the formal explanations given above of why it is not possible to use regex for parsing (X)HTML/XML?
EDIT: After reading the first answer I thought that I should clarify: I am looking for a "translation" that also briefly explains the concepts it tries to translate: at the end of an answer, the reader should have a rough idea - for example - of what "regular language" and "context-free grammar" mean...

Concentrate on this one:
A finite automaton (which is the data structure underlying a regular
expression) does not have memory apart from the state it's in, and if
you have arbitrarily deep nesting, you need an arbitrarily large
automaton, which collides with the notion of a finite automaton.
The definition of regular expressions is equivalent to the fact that a test of whether a string matches the pattern can be performed by a finite automaton (one different automaton for each pattern). A finite automaton has no memory - no stack, no heap, no infinite tape to scribble on. All it has is a finite number of internal states, each of which can read a unit of input from the string being tested, and use that to decide which state to move to next. As special cases, it has two termination states: "yes, that matched", and "no, that didn't match".
HTML, on the other hand, has structures that can nest arbitrarily deep. To determine whether a file is valid HTML or not, you need to check that all the closing tags match a previous opening tag. To understand it, you need to know which element is being closed. Without any means to "remember" what opening tags you've seen, no chance.
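To make the "memory" point concrete, here is a minimal sketch in Java (assuming the tag names have already been extracted from the text) of the balance check that a finite automaton cannot perform, because it needs an unbounded stack:

import java.util.ArrayDeque;
import java.util.Deque;

public class TagBalance {
    // Returns true if every closing tag matches the most recently opened tag.
    static boolean balanced(Iterable<String> tags) {
        Deque<String> open = new ArrayDeque<>();
        for (String tag : tags) {
            if (tag.startsWith("/")) {                      // closing tag, e.g. "/div"
                if (open.isEmpty() || !open.pop().equals(tag.substring(1))) {
                    return false;                           // mismatched close
                }
            } else {
                open.push(tag);                             // opening tag, e.g. "div"
            }
        }
        return open.isEmpty();                              // everything was closed
    }
}

The stack named open can grow without bound as nesting deepens; a finite automaton has no equivalent of it.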
Note however that most "regex" libraries actually permit more than just the strict definition of regular expressions. If they can match back-references, then they've gone beyond a regular language. So the reason why you shouldn't use a regex library on HTML is a little more complex than the simple fact that HTML is not regular.

The fact that HTML doesn't represent a regular language is a red herring. Regular expressions and regular languages sound similar, but they are not the same - they share a common origin, but there's a notable distance between the academic notion of "regular languages" and the current matching power of engines. In fact, almost all modern regular expression engines support non-regular features - a simple example is (.*)\1, which uses backreferencing to match a repeated sequence of characters - for example 123123, or bonbon. Matching of recursive/balanced structures makes these even more fun.
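Since this page is tagged java: java.util.regex supports this non-regular feature too, which a quick sketch can verify:

import java.util.regex.Pattern;

public class BackrefDemo {
    public static void main(String[] args) {
        // (.+)\1 matches any non-empty sequence followed by an exact repeat of itself.
        Pattern doubled = Pattern.compile("(.+)\\1");
        System.out.println(doubled.matcher("123123").matches()); // true
        System.out.println(doubled.matcher("bonbon").matches()); // true
        System.out.println(doubled.matcher("123124").matches()); // false
    }
}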
Wikipedia puts this nicely, in a quote by Larry Wall:
'Regular expressions' [...] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood).
"Regular expression can only match regular languages", as you can see, is nothing more than a commonly stated fallacy.
So, why not then?
A good reason not to match HTML with regular expressions is that "just because you can doesn't mean you should". While it may be possible - there are simply better tools for the job. Consider:
Valid HTML is harder/more complex than you may think.
There are many types of "valid" HTML - what is valid in HTML, for example, isn't valid in XHTML.
Much of the free-form HTML found on the internet is not valid anyway. HTML libraries do a good job of dealing with these cases as well, and have been tested against many of them.
Very often it is impossible to match a part of the data without parsing it as a whole. For example, you might be looking for all titles, and end up matching inside a comment or a string literal. <h1>.*?</h1> may be a bold attempt at finding the main title, but it might find:
<!-- <h1>not the title!</h1> -->
Or even:
<script>
var s = "Certainly <h1>not the title!</h1>";
</script>
The last point is the most important:
Using a dedicated HTML parser is better than any regex you can come up with. Very often, XPath allows a better expressive way of finding the data you need, and using an HTML parser is much easier than most people realize.
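As an illustration of how short the parser route can be, here is a minimal sketch using the third-party jsoup library (an assumed dependency; the answer itself does not name a specific parser). It finds only the real title, skipping the comment and the script string from the examples above:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class TitleFinder {
    public static void main(String[] args) {
        String html = "<!-- <h1>not the title!</h1> -->"
                + "<script>var s = \"Certainly <h1>not the title!</h1>\";</script>"
                + "<h1>The real title</h1>";
        Document doc = Jsoup.parse(html);
        // Comments and script bodies are not elements, so they are never selected.
        for (Element h1 : doc.select("h1")) {
            System.out.println(h1.text()); // prints only "The real title"
        }
    }
}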
A good summary of the subject, and an important comment on when mixing Regex and HTML may be appropriate, can be found in Jeff Atwood's blog: Parsing Html The Cthulhu Way.
When is it better to use a regular expression to parse HTML?
In most cases, it is better to use XPath on the DOM structure a library can give you. Still, contrary to popular opinion, there are a few cases when I would strongly recommend using a regex and not a parser library:
Given a few of these conditions:
When you need a one-time update of your HTML files, and you know the structure is consistent.
When you have a very small snippet of HTML.
When you aren't dealing with an HTML file, but a similar templating engine (it can be very hard to find a parser in that case).
When you want to change parts of the HTML, but not all of it - a parser, to my knowledge, cannot answer this request: it will parse the whole document, and save a whole document, changing parts you never wanted to change.

Because HTML can have unlimited nesting of <tags><inside><tags and="<things><that><look></like></tags>"></inside></each></other> and regex can't really cope with that because it can't track a history of what it's descended into and come out of.
A simple construct that illustrates the difficulty:
<body><div id="foo">Hi there! <div id="bar">Bye!</div></div></body>
99.9% of generalized regex-based extraction routines will be unable to correctly give me everything inside the div with the ID foo, because they can't tell the closing tag for that div from the closing tag for the bar div. That is because they have no way of saying "okay, I've now descended into the second of two divs, so the next div close I see brings me back out one, and the one after that is the close tag for the first". Programmers typically respond by devising special-case regexes for the specific situation, which then break as soon as more tags are introduced inside foo and have to be unsnarled at tremendous cost in time and frustration. This is why people get mad about the whole thing.

A regular language is a language that can be matched by a finite state machine.
(Understanding finite state machines, push-down machines, and Turing machines is basically the curriculum of a fourth-year college CS course.)
Consider the following machine, which recognizes the string "hi".
(Start) --Read h--> (A) --Read i--> (Succeed)
   \                 \
    \                 --read any other value--> (Fail)
     --read any other value--> (Fail)
This is a simple machine to recognize a regular language; each expression in parentheses is a state, and each arrow is a transition. Building a machine like this will allow you to test any input string against a regular language - hence, a regular expression.
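A direct, hand-rolled Java transcription of that machine, as a sketch - note that the only "memory" is which state we are currently in:

public class HiRecognizer {
    enum State { START, A, SUCCEED, FAIL }

    static boolean recognizesHi(String input) {
        State state = State.START;
        for (char c : input.toCharArray()) {
            switch (state) {
                case START:   state = (c == 'h') ? State.A : State.FAIL; break;
                case A:       state = (c == 'i') ? State.SUCCEED : State.FAIL; break;
                case SUCCEED: state = State.FAIL; break;  // trailing input: no match
                default:      break;                      // FAIL stays FAIL
            }
        }
        return state == State.SUCCEED;
    }

    public static void main(String[] args) {
        System.out.println(recognizesHi("hi"));  // true
        System.out.println(recognizesHi("hit")); // false
    }
}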
HTML requires you to know more than just what state you are in - it requires a history of what you have seen before, to match tag nesting. You can accomplish this if you add a stack to the machine, but then it is no longer "regular". This is called a push-down machine, and it recognizes a context-free grammar.

A regular expression is a machine with a finite (and typically rather small) number of discrete states.
To parse XML, C, or any other language with arbitrary nesting of language elements, you need to remember how deep you are. That is, you must be able to count braces/brackets/tags.
You cannot count with finite memory. There may be more brace levels than you have states! You might be able to parse a subset of your language that restricts the number of nesting levels, but it would be very tedious.
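To see how tedious that subset approach gets, consider a hypothetical sketch: a pattern for <b> elements nested at most two levels deep already balloons, and every extra level of nesting repeats the whole previous pattern inside itself.

import java.util.regex.Pattern;

public class BoundedNesting {
    public static void main(String[] args) {
        // Depth 1: a <b> element containing no tags at all.
        String depth1 = "<b>[^<]*</b>";
        // Depth 2: the content may additionally contain depth-1 elements.
        String depth2 = "<b>(?:[^<]|" + depth1 + ")*</b>";
        Pattern p = Pattern.compile(depth2);
        System.out.println(p.matcher("<b>bold <b>bolder</b></b>").matches());   // true
        System.out.println(p.matcher("<b><b><b>deeper</b></b></b>").matches()); // false: one level too deep
    }
}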

A grammar is a formal definition of where words can go. For example, adjectives precede nouns in English grammar, but follow nouns en la gramática española.
Context-free means that the grammar works universally in all contexts. Context-sensitive means there are additional rules in certain contexts.
In C#, for example, using means something different in using System; at the top of a file than in using (var sw = new StringWriter (...)). A more relevant example is the following code within code:
void Start ()
{
    string myCode = @"
void Start()
{
    Console.WriteLine (""x"");
}
";
}

There's another practical reason for not using regular expressions to parse XML and HTML that has nothing to do with the computer science theory at all: your regular expression will either be hideously complicated, or it will be wrong.
For example, it's all very well writing a regular expression to match
<price>10.65</price>
But if your code is to be correct, then:
It must allow whitespace after the element name in both start and end tag
If the document is in a namespace, then it should allow any namespace prefix to be used
It should probably allow and ignore any unknown attributes appearing in the start tag (depending on the semantics of the particular vocabulary)
It may need to allow whitespace before and after the decimal value (again, depending on the detailed rules of the particular XML vocabulary).
It should not match something that looks like an element, but is actually in a comment or CDATA section (this becomes especially important if there is a possibility of malicious data trying to fool your parser).
It may need to provide diagnostics if the input is invalid.
Of course some of this depends on the quality standards you are applying. We see a lot of problems on StackOverflow with people having to generate XML in a particular way (for example, with no whitespace in the tags) because it is being read by an application that requires it to be written in a particular way. If your code has any kind of longevity then it's important that it should be able to process incoming XML written in any way that the XML standard permits, and not just the one sample input document that you are testing your code on.
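For contrast, a parser-based version of the price example is short. A minimal sketch using only the JDK's built-in DOM and XPath APIs (ignoring namespaces for brevity):

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class PriceExtractor {
    public static void main(String[] args) throws Exception {
        String xml = "<order><!-- <price>9.99</price> --><price> 10.65 </price></order>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        // The parser has already dealt with comments, CDATA, attribute quoting, etc.
        String price = XPathFactory.newInstance().newXPath()
                .evaluate("/order/price", doc).trim();
        System.out.println(price); // 10.65, not the commented-out 9.99
    }
}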

Others have given brief definitions for most of these things, but I don't really think they cover WHY normal regexes are what they are.
There are some great resources on what a finite state machine is, but in short, a seminal paper in computer science proved that the basic grammar of regexes (the standard ones, used by grep, not the extended ones, like PCRE) can always be turned into a finite-state machine, meaning a 'machine' where you are always in a box and have a limited number of ways to move to the next box. In short, you can always tell what the next 'thing' you need to do is just by looking at the current character. (And yes, even when it comes to things like 'match at least 4, but no more than 5 times', you can still create a machine like this.) (I should note that the machine I describe here is technically only a subtype of finite-state machines, but it can implement any other subtype, so...)
This is great because you can always very efficiently evaluate such a machine, even for large inputs. Studying these sorts of questions (how does my algorithm behave when the number of things I feed it gets big) is called studying the computational complexity of the technique. If you're familiar with how a lot of calculus deals with how functions behave as they approach infinity, well, that's pretty much it.
So what's so great about a standard regular expression? Well, any given regex can match a string of length N in no more than O(N) time (meaning that doubling the length of your input doubles the time it takes; it says nothing about the speed for a given input). (Of course, some are faster: the regex * could match in O(1), meaning constant, time.) The reason is simple: because the system has only a few paths from each state, you never 'go back', and you only need to check each character once. That means even if I pass you a 100 gigabyte file, you'll still be able to crunch through it pretty quickly: which is great!
Now, it's pretty clear why you can't use such a machine to parse arbitrary XML: you can have infinite tags-in-tags, and to parse correctly you would need an infinite number of states. But if you allow recursive replaces, a PCRE is Turing complete: so it could totally parse HTML! Even if you don't, a PCRE can parse any context-free grammar, including XML. So the answer is "yeah, you can". Now, it might take exponential time (you can't use our neat finite-state machine, so you need a big fancy parser that can rewind, which means that a crafted expression will take centuries on a big file), but still. Possible.
But let's talk real quick about why that's an awful idea. First of all, while you'll see a ton of people saying "omg, regexes are so powerful", the reality is... they aren't. What they are is simple. The language is dead simple: you only need to know a few meta-characters and their meanings, and you can understand (eventually) anything written in it. However, the issue is that those meta-characters are all you have. See, they can do a lot, but they're meant to express fairly simple things concisely, not to try to describe a complicated process.
And XML sure is complicated. It's pretty easy to find examples in some of the other answers: you can't match stuff inside comment fields, etc. Representing all of that in a programming language takes work: and that's with the benefits of variables and functions! PCREs, for all their features, can't come close to that. Any hand-made implementation will be buggy: scanning blobs of meta-characters to check matching parentheses is hard, and it's not like you can comment your code. It'd be easier to define a meta-language and compile that down to a regex: and at that point, you might as well just take the language you wrote your meta-compiler with and write an XML parser. It'd be easier for you, faster to run, and just better overall.
For more neat info on this, check out this site. It does a great job of explaining all this stuff in layman's terms.

Don't parse XML/HTML with regex; use a proper XML/HTML parser and a powerful XPath query.
theory:
According to compiler theory, XML/HTML can't be parsed with a regex based on a finite state machine. Due to the hierarchical construction of XML/HTML, you need to use a pushdown automaton and manipulate an LALR grammar using a tool like YACC.
realLife©®™ everyday tool in a shell:
You can use one of the following :
xmllint often installed by default with libxml2, xpath1 (check my wrapper to have newline-delimited output)
xmlstarlet can edit, select, transform... Not installed by default, xpath1
xpath installed via perl's module XML::XPath, xpath1
xidel xpath3
saxon-lint my own project, wrapper over Michael Kay's Saxon-HE Java library, xpath3
or you can use high-level languages and proper libs. I think of:
python's lxml (from lxml import etree)
perl's XML::LibXML, XML::XPath, XML::Twig::XPath, HTML::TreeBuilder::XPath
ruby nokogiri, check this example
php DOMXpath, check this example
Check: Using regular expressions with HTML tags

In a purely theoretical sense, it is impossible for regular expressions to parse XML. They are defined in a way that allows them no memory of any previous state, thus preventing the correct matching of an arbitrary tag, and they cannot penetrate to an arbitrary depth of nesting, since the nesting would need to be built into the regular expression.
Modern regex parsers, however, are built for their utility to the developer, rather than their adherence to a precise definition. As such, we have things like back-references and recursion that make use of knowledge of previous states. Using these, it is remarkably simple to create a regex that can explore, validate, or parse XML.
Consider for example,
(?:
    <!\-\-[\S\s]*?\-\->
    |
    <([\w\-\.]+)[^>]*?
    (?:
        \/>
        |
        >
        (?:
            [^<]
            |
            (?R)
        )*
        <\/\1>
    )
)
This will find the next properly formed XML tag or comment, and it will only find it if its entire contents are properly formed. (This expression has been tested using Notepad++, which uses Boost C++'s regex library, which closely approximates PCRE. Note that the (?R) recursion construct is a PCRE-style feature; Java's built-in java.util.regex does not support it.)
Here's how it works:
The first chunk matches a comment. It's necessary for this to come first so that it will deal with any commented-out code that otherwise might cause hang ups.
If that doesn't match, it will look for the beginning of a tag. Note that it uses parentheses to capture the name.
This tag will either end in a />, thus completing the tag, or it will end with a >, in which case it will continue by examining the tag's contents.
It will continue parsing until it reaches a <, at which point it will recurse back to the beginning of the expression, allowing it to deal with either a comment or a new tag.
It will continue through the loop until it arrives at either the end of the text or at a < that it cannot parse. Failing to match will, of course, cause it to start the process over. Otherwise, the < is presumably the beginning of the closing tag for this iteration. Using the back-reference inside a closing tag <\/\1>, it will match the opening tag for the current iteration (depth). There's only one capturing group, so this match is a simple matter. This makes it independent of the names of the tags used, although you could modify the capturing group to capture only specific tags, if you need to.
At this point it will either kick out of the current recursion up to the next level, or end with a match.
This example solves problems dealing with whitespace or identifying relevant content through the use of character groups that merely negate < or >, or, in the case of the comments, by using [\S\s], which will match anything, including carriage returns and new lines, even in single-line mode, continuing until it reaches a -->. Hence, it simply treats everything as valid until it reaches something meaningful.
For most purposes, a regex like this isn't particularly useful. It will validate that XML is properly formed, but that's all it will really do, and it doesn't account for properties (although this would be an easy addition). It's only this simple because it leaves out real world issues like this, as well as definitions of tag names. Fitting it for real use would make it much more of a beast. In general, a true XML parser would be far superior. This one is probably best suited for teaching how recursion works.
Long story short: use an XML parser for real work, and use this if you want to play around with regexes.

Related

Regex pattern to match and replace a group of x whitespaces before a xml tag with a tab [duplicate]


Java : Matcher.find using high cpu utilization

I am using mod security rules https://github.com/SpiderLabs/owasp-modsecurity-crs to sanitize user input data. I am facing CPU spikes and delays when matching user input against the mod security rule regular expressions. Overall there are 500+ regular expressions to check for different types of attacks (XSS, bad robots, generic and SQL). For each request, I go through all parameters and check them against all 500 regular expressions, using Matcher.find. In this process some parameters fall into effectively infinite matching loops, which I tackled using the technique below.
Cancelling a long running regex match?.
Sanitizing a user request takes around ~500 ms and the CPU % shoots up. I analyzed this using visualvm.java.net with my test suite runner.
Cpu Profile Output
Please help me reduce the CPU usage % and load average.
If possible, compile your regexes once and keep them around, rather than repeatedly (implicitly) compiling (especially inside a loop).
See java.util.regex - importance of Pattern.compile()? for more info.
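A minimal sketch of the compile-once approach (the rule patterns below are made-up placeholders, not actual CRS rules):

import java.util.List;
import java.util.regex.Pattern;

final class RuleSet {
    // Compiled once when the class loads, then reused for every request.
    private static final List<Pattern> RULES = List.of(
            Pattern.compile("<script\\b"),           // placeholder rule
            Pattern.compile("(?i)union\\s+select")   // placeholder rule
    );

    static boolean matchesAny(String input) {
        for (Pattern p : RULES) {
            if (p.matcher(input).find()) {
                return true;
            }
        }
        return false;
    }
}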
I suggest you look at this paper:
"Towards Faster String Matching for Intrusion Detection or Exceeding the Speed of Snort"
There are better ways to do the matching you describe. Essentially you take the 500 patterns you want to match and compile it into a single suffix tree which can very efficiently match an input against all the rules at once.
The paper explains that this approach was described as "Boyer-Moore Approach to Exact Set Matching" by Dan Gusfield.
Boyer-Moore is a well known algorithm for String matching. The paper describes a variation on Boyer-Moore for Set Matching.
I think this is the root of your problem, not the regex performance per se:
For each request , I go through all parameters and check against all these 500 regular expressions
No matter how fast your regex will be, this is still plenty of work. I don't know how many parameters you have, but even if there are only a few, that's still checking thousands of regular expressions per request. That can kill your CPU.
Apart from the obvious things like improving your regex performance by precompiling and/or simplifying them, you can do the following things to reduce the amount of regex checking:
Use positive-validation of user input based on the parameter type. E.g. if some parameter must be a simple number, don't waste time checking if it contains malicious XML script. Just check whether it matches [0-9]+ (or something similarly simple). If it does, it is ok - skip checking all the 500 regexps.
Try to find simple regexps that could eliminate the whole classes of attacks - find common things in your regexps. If e.g. you've got 100 regexps checking for existence of certain HTML tags, check if the content contains at least one HTML tag first. If it doesn't, you immediately save on checking 100 regexps.
Cache results. Many parameters generated in webapps repeat themselves. Don't check the same content over and over again; just remember the final validation result. Be sure to limit the maximum size of the cache to avoid DoS attacks.
Also note that negative-validation is usually easy to bypass. Someone just changes a few chars in their malicious code and your regexps won't match. You'll have to grow your "database" of regexps in order to protect against new attacks. Positive validation (whitelisting) doesn't have this disadvantage and is much more effective.
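A minimal sketch of that positive-validation (whitelisting) idea - the parameter names and patterns here are hypothetical:

import java.util.Map;
import java.util.regex.Pattern;

final class Whitelist {
    private static final Map<String, Pattern> EXPECTED = Map.of(
            "userId",  Pattern.compile("[0-9]+"),
            "sortKey", Pattern.compile("[A-Za-z_]{1,32}")
    );

    // If a parameter is known-good, the 500 attack regexes can be skipped for it.
    static boolean isKnownGood(String name, String value) {
        Pattern p = EXPECTED.get(name);
        return p != null && p.matcher(value).matches();
    }
}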
Avoid expressions with:
Multiline
case insensitive
etc.
Perhaps you can consider grouping the regular expressions and applying a given group of regular expressions depending on the user input.
If you have such a big number of regexes, you could group (at least some of) them using a trie (http://en.wikipedia.org/wiki/Trie).
The idea is that if you have, for example, regexes like /abc[0-9-]/, /abde/, /another example/, /.something else/ and /.I run out of ideas/, you can combine them into a single regex
/a(?:b(?:c[0-9-]|de)|nother example)|.(?:I run out of ideas|something else)/
In this way, the matcher has to run only once instead of four times, and you avoid a lot of backtracking, because of how the common starting parts have been written in the regex above.
There must be a subset of problematic regexes among these 500. That is, a regex like
String s = "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAB";
Pattern.compile("(A+)+").matcher(s).matches();
will take years to complete.
So in your case I would log all the problematic regexes with their problematic inputs. Once these are found, you could manually rewrite those few problematic regexes and test them against the originals. Regexes can always be rewritten as simpler and more readable Java functions.
Another option, though it would not resolve the problem above, is to utilize a faster (20x in some cases) and more limited regex library. One is available in Maven Central.

Should we use regular expression in Java?

I know regular expressions are very powerful, and to become an expert with them is not easy.
One of my colleagues once wrote a Java class to parse formatted text files. Unfortunately it caused a StackOverflowError in the first integration test. It seemed difficult to find the bug, until another colleague from the structured-programming world came over and fixed it quickly by throwing away all the regular expressions and instead using many nested conditional statements and many split and trim methods - and it works very well!
Well, why do we need regular expressions in a programming language like Java? As far as I know, the only necessary usage of regular expressions is the find/replace function in text editors.
Like everything else: Use with care and KISS
I use regexes quite often, but I don't go over the top and write a 100 character regex, because I know that I (personally) won't understand it later... in fact I think my limit is about 30-40 characters, something larger than that makes me spend too much time scratching my head.
Anything that can be expressed as a regular expression can, by definition, also be expressed as hand-written code with IFs and loops. You use regexes basically for two reasons:
Regex libraries tend to have optimized implementations that most of the time will beat a hand-coded "IF" chain for the same expression.
Regexes, if properly written, are usually easier to follow than the equivalent IF chains, especially for more complex expressions.
If your expression gets too complex, then use the advice given by this answer. If it gets truly nasty, think about learning how to use a parser generator like ANTLR or JavaCC. A simple grammar can usually replace a regex, and it is a lot easier to maintain.
So the multiple nested conditional statements with many split and trim methods are easier for you to debug than a single line or two with regular expressions?
My preference is regular expressions because once you learn them, they are far more maintainable and far easier to read than huge nested if blocks.
If you find that a regular expression would get too complex and unmaintainable, use code instead. Regular expressions can get very complex even for things that sound very simple at first. For example, validation of dates in the format mm/dd/yy[yy] is as "simple" as:
^(((((((0?[13578])|(1[02]))[\.\-/]?((0?[1-9])|([12]\d)|(3[01])))|(((0?[469])|(11))[\.\-/]?((0?[1-9])|([12]\d)|(30)))|((0?2)[\.\-/]?((0?[1-9])|(1\d)|(2[0-8]))))[\.\-/]?(((19)|(20))?([\d][\d]))))|((0?2)[\.\-/]?(29)[\.\-/]?(((19)|(20))?(([02468][048])|([13579][26])))))$
Nobody can maintain that. Manually parsing the date will need more code but can be much more readable and maintainable.
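For instance, the hand-written version can delegate the tricky month-length and leap-year logic to the JDK's date API. A minimal sketch (restricted to four-digit years for brevity):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.time.format.ResolverStyle;

public class DateCheck {
    private static final DateTimeFormatter MDY = DateTimeFormatter
            .ofPattern("M/d/uuuu")
            .withResolverStyle(ResolverStyle.STRICT); // rejects 2/30, 2/29/1900, ...

    static boolean isValidDate(String s) {
        try {
            LocalDate.parse(s, MDY);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidDate("2/29/2000")); // true (leap year)
        System.out.println(isValidDate("2/29/1900")); // false (not a leap year)
    }
}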
Regular expressions are very powerful and useful for matching TEXT patterns, but are bad for validation with numeric parts like dates.
As always, you should use the best tool for the job. I would define the "best tool" by the most simple, understandable, effective method that fulfills the requirements.
Often regexes will simplify code and make it more readable. But this is not always the case.
Also, I would not jump to conclusions that regexes caused the StackOverflowError.
Regular expressions are a tool (like many others). You should use it when the work to be done could best be done with that tool. To know which tool to use, it helps ask a question like "When could I use regular expressions?". And of course it will become easier to decide which tool to use when you have many different tools in your toolbox and you know them fairly well.
You can use regexes cleverly by splitting them into smaller chunks, something like:
final String REGEX_SOMETHING = "something";
final String REGEX_WHATEVER = "whatever";
..
String REGEX_COMPLETE = REGEX_SOMETHING + REGEX_WHATEVER + ...
Regular expressions can be easier to read, but they can also be too complicated. It depends on the format of data you want to match.
The Java RE implementation still has some quirks, with the effect that some quite simple expressions (like '((?:[^'\\]|\\.)*)') cause a stack overflow when matching longer strings. So make sure you test with real life data (and more extreme examples, too) - or use a regex engine with a different implementation (there are several ones, also as Java libraries).
Regular expressions are very powerful for finding patterns in content. You can certainly avoid using regular expressions and rely on conditional statements, but you will soon notice that it takes many more lines of code to accomplish the same task. Using too many nested conditional statements increases the cyclomatic complexity of your code; as a result, it becomes even harder to test, because there are too many branches to cover. Further, it also makes the code difficult to read and understand.
Granted, your colleague should have written test cases to test his regular expressions first.
There's no right or wrong answer here. If the task is simple, then there's no need to use regular expression. Otherwise, it is nice to sprinkle a little regular expressions here and there to make your code easy to read.

When would it be worth using RegEx in Java?

I'm writing a small app that reads some input and does something based on that input.
Currently, to look for a line that ends with, say, "magic", I use String's endsWith method. It's pretty clear to whoever reads my code what's going on.
Another way to do it is to create a Pattern and try to match a line that ends with "magic". This is also clear, but I personally think this is overkill because the pattern I'm looking for is not complex at all.
When do you think it's worth using RegEx in Java? If it's complexity, how would you personally define what's complex enough?
Also, are there times when using Patterns are actually faster than string manipulation?
EDIT: I'm using Java 6.
Basically: if there is a non-regex operation that does what you want in one step, always go for that.
This is not so much about performance, but about a) readability and b) compile-time-safety. Specialized non-regex versions are usually a lot easier to read than regex-versions. And a typo in one of these specialized methods will not compile, while a typo in a Regex will fail miserably at runtime.
Comparing Regex-based solutions to non-Regex-bases solutions
String s = "Magic_Carpet_Ride";
s.startsWith("Magic"); // non-regex
s.matches("Magic.*"); // regex
s.contains("Carpet"); // non-regex
s.matches(".*Carpet.*"); // regex
s.endsWith("Ride"); // non-regex
s.matches(".*Ride"); // regex
In all these cases it's a no-brainer: use the non-regex version.
But when things get a bit more complicated, it depends. I guess I'd still stick with non-regex in the following case, but many wouldn't:
// Test whether a string ends with "magic" in any case,
// followed by optional white space
s.toLowerCase().trim().endsWith("magic"); // non-regex, 3 calls
s.matches(".*(?i:magic)\\s*"); // regex, 1 call, but ugly
And in response to RegexesCanCertainlyBeEasierToReadThanMultipleFunctionCallsToDoTheSameThing:
I still think the non-regex version is more readable, but I would write it like this:
s.toLowerCase()
.trim()
.endsWith("magic");
Makes the whole difference, doesn't it?
You would use Regex when the normal manipulations on the String class are not enough to elegantly get what you need from the String.
A good indicator that this is the case is when you start splitting, then splitting those results, then splitting those results. The code is getting unwieldy. Two lines of Pattern/Regex code can clean this up, neatly wrapped in a method that is unit tested....
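For example, a hypothetical sketch of that cleanup - extracting key/value pairs in one pass with groups, instead of split(";") followed by split("=") on every piece:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PairParser {
    public static void main(String[] args) {
        // One compiled pattern replaces the nested split-and-split-again code.
        Pattern pair = Pattern.compile("(\\w+)=([^;]*)");
        Matcher m = pair.matcher("host=example.com;port=8080;debug=true");
        while (m.find()) {
            System.out.println(m.group(1) + " -> " + m.group(2));
        }
    }
}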
Anything that can be done with regex can also be hand-coded.
Use regex if:
Doing it manually is going to take more effort without much benefit.
You can easily come up with a regex for your task.
Don't use regex if:
It's very easy to do it otherwise, as in your example.
The string you're parsing does not lend itself to regex. (it is customary to link to this question)
I think you are best with using endsWith. Unless your requirements change, it's simpler and easier to understand. Might perform faster too.
If there was a bit more complexity, such as you wanted to match "magic", "majik', but not "Magic" or "Majik"; or you wanted to match "magic" followed by a space and then 1 word such as "... magic spoon" but not "...magic soup spoon", then I think RegEx would be a better way to go.
Any complex parsing where you are generating a lot of Objects would be better done with RegEx when you factor in both the computing power and the brainpower it takes to generate the code for that purpose. If you have a RegEx guru handy, it's almost always worthwhile, as the patterns can easily be tweaked to accommodate business rule changes without the major loop refactoring that would likely be needed if you used pure Java to do some of the complex things RegEx does.
If your basic line ending is the same every time, such as with "magic", then you are better off using endsWith.
However, if you have a line that has the same base, but can have multiple values, such as:
<string> <number> <string> <string> <number>
where the strings and numbers can be anything, you're better off using RegEx.
Your lines are always ending with a string, but you don't know what that string is.
If it's as simple as endsWith, startsWith or contains, then you should use these functions. If you are processing more "complex" strings and you want to extract information from these strings, then regexp/matchers can be used.
If you have something like "commandToRetrieve someNumericArgs someStringArgs someOptionalArgs" then regexp will ease your task a lot :)
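A sketch of that case, with a hypothetical command format (a word, a number, a word, then optional trailing arguments):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CommandParser {
    private static final Pattern CMD =
            Pattern.compile("(\\w+)\\s+(\\d+)\\s+(\\w+)(?:\\s+(.*))?");

    public static void main(String[] args) {
        Matcher m = CMD.matcher("fetch 42 users verbose=true");
        if (m.matches()) {
            System.out.println("command = " + m.group(1)); // fetch
            System.out.println("number  = " + m.group(2)); // 42
            System.out.println("arg     = " + m.group(3)); // users
            System.out.println("options = " + m.group(4)); // verbose=true
        }
    }
}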
I'd never use regexes in Java if I have an easier way to do it, like in this case the endsWith method. Regexes in Java are as ugly as they get, probably with the only exception of the matches method on String.
Usually avoiding regexes makes your code more readable and easier for other programmers. The opposite is true as well: complex regexes might confuse even the most experienced hackers out there.
As for performance concerns: just profile. Especially in Java.
If you are familiar with how regexp works you will soon find that a lot of problems are easily solved by using regexp.
Personally I reach for Java String operations if the task is easy, but if you start splitting strings and then doing substring on the results, I'd start thinking in regular expressions.
And again, if you use regular expressions, why stop at lines? By configuring your regexp you can easily read entire files in one regular expression (pass Pattern.DOTALL to Pattern.compile and your regexp won't stop at newlines). I'd combine this with Apache Commons IOUtils.toString() and you have something very powerful for quick jobs.
I would even bring out a regular expression to parse some xml if needed. (For instance in a unit test, where I want to check that some elements are present in the xml).
For instance, from some unit test of mine:
Pattern pattern = Pattern.compile(
"<Monitor caption=\"(.+?)\".*?category=\"(.+?)\".*?>"
+ ".*?<Summary.*?>.+?</Summary>"
+ ".*?<Configuration.*?>(.+?)</Configuration>"
+ ".*?<CfgData.*?>(.+?)</CfgData>", Pattern.DOTALL);
which will match all segments in this xml and pick out some segments that I want to do some sub matching on.
I would suggest using a regular expression when you know the format of the input but are not necessarily sure of the value (or possible values) of the formatted input.
What I'm saying is: if you have input that always ends with, in your case, "magic", then String.endsWith() works fine (seeing as you know that every possible input value will end with "magic").
If you have a format, e.g. the RFC 5322 message format, one cannot simply say that all email addresses end with .com; hence you can create a regular expression that conforms to the RFC 5322 standard for verification.
In a nutshell, if you know a format structure of your input data but don't know exactly what values (or possible values) you can receive, use regular expressions for validation.
There's a saying that goes:
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. (link).
For a simple test, I'd proceed exactly like you've done. If you find that it's getting more complicated, then I'd consider Regular Expressions only if there isn't another way.

String manipulation vs Regexps

We are often told that Regexps are slow and should be avoided whenever possible.
However, taking into account the overhead of doing the string manipulation oneself (not talking about algorithmic mistakes - that is a different matter), especially in PHP or Perl (and maybe Java): what is the limit? In which cases is string manipulation a better alternative? Which regexps are particularly CPU-greedy?
For instance, for the following, in C++, Java, PHP or Perl, what would you recommend?
The regexps would probably be faster:
s/abc/def/g or a ... while((i=index("abc",$x)>=0) ...$y .= substr()... based solution?
s/(\d)+/N/g or a scanning algorithm
But what about
an email validation regexp?
s/((0|\w)+?[xy]*[^xy]){2,7}/u/g
wouldn't a handmade and specific algorithm be faster (while longer to write)?
edit
The point of the question is to determine what kind of regexp would better be rewritten specifically for a given problem via string manipulation?
edit2
A common implementation is Perl regexp. For instance in Perl - that requires to know how they are implemented - what kind of regexp is to be avoided, because the implementation will make the process lengthy and ineffective? It may not be a complex regexp...
edit July 2011 (based on comments)
I'm not saying all regexps are slow. Some particular regexp patterns are known to be slow, due to the particular processing they require and due to their implementation. In recent Perl/PHP implementations, for instance, what is known to be rather slow and should be avoided? The answer is expected from people who have already done their own research (with a profiler...) and who are able to provide general guidelines about what is recommended and what is to be avoided.
Who said regexes were slow? At least in Perl they tend to be the preferred method of manipulating strings.
Regexes are bad at some things like email validation because the subject is too complex, not because they are slow. A proper email validation regex is over 6,000 characters long, and it doesn't even handle all of the cases (you have to strip out comments first).
At least in Perl 5, if it has a grammar it probably shouldn't be parsed with one regex.
You should also rewrite a regex as a custom function if the regex has grown to the point it can no longer be easily maintained (see the previous email validation example) or profiling shows that the regex is the slow component of your code.
You seem to be concerned with the speed of the regex vs the custom algorithm, but that is not a valid concern until you prove that it is with a profiler. Write the code in the most maintainable way. If a regex is clear, then use a regex. If a custom algorithm is clear, then use a custom algorithm. If you find that either is eating up a lot of time after profiling your code, then start looking for alternatives.
A nice feature of manipulating text with regular expressions is that patterns are high-level and declarative. This leaves the implementation considerable room for optimization such as factoring out the longest common prefix or using Boyer-Moore for static strings. Concise notation makes for quicker reading by experts. I understand immediately what
if (s/^(.)//) {
...
}
is doing, and substr($_, 0, 1) = "" looks noisy in comparison.
Rather than the lower bound, the important consideration for regular expressions is the upper bound. It's a powerful tool, so people believe it's capable of correctly extracting tokens from XML, email addresses, or C++ programs and don't realize that an even more powerful tool such as a parser is necessary.
Regular expressions will never be faster than a hand-made algorithm for a very specific purpose. Worse, in PHP they have to be compiled the first time they're used (a cache is used afterwards).
However, they are certainly more succinct. Moreover, writing custom algorithms is often slower than using regexes, because regular expression engines are usually implemented in a lower-level language, with less overhead in calling functions, etc.
For instance, preg_replace('/a/', 'b', $string) will almost certainly be faster than looping in PHP through the string. But this is a constant penalty in execution time and sometimes regular expressions, due to backtracking, can have a much worse asymptotic behavior.
You are strongly encouraged to know how regular expressions are implemented so that you can know when you're writing inefficient ones.
Some regular expressions are extremely fast and the difference between the regex and a custom solution may be negligible (or not worth anyone's time to bother).
The cases where regular expressions are slow, however, are when excessive backtracking occurs. Regular expressions parse from left to right and have the potential to match text in more than one way. So if the engine reaches a point where it realizes that the pattern isn't going to match your test string, it may start over and try to match in another way. This repeated backtracking adds up and slows down the algorithm.
Often the regular expression can be rewritten to perform better. But the ultimate in performance would be to write your own optimized parser for the specific task. By writing your own parser you can for example parse from left to right while maintaining a memory (or state). If you use this technique in procedural code you can often achieve the effect you're looking for in one pass and without the slowness of backtracking.
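As a toy illustration of that one-pass-with-state idea, here is a Java sketch of the s/(\d)+/N/g task from the question, done with a single left-to-right scan and no backtracking:

public class DigitRuns {
    static String collapseDigits(String s) {
        StringBuilder out = new StringBuilder(s.length());
        boolean inRun = false;                    // the entire "state" we carry
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (Character.isDigit(c)) {
                if (!inRun) out.append('N');      // emit N once per digit run
                inRun = true;
            } else {
                out.append(c);
                inRun = false;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(collapseDigits("a12b345c")); // aNbNc
    }
}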
I was faced with this decision earlier this year. In fact the task at hand was on the outer fringe of what was even possible with regular expressions. In the end I decided to write my own parser, an embedded pushdown automaton, which is incredibly efficient for what I was trying to do. The task, by the way, was to build something that can parse regular expressions and provide Intellisense-like code hinting for them.
It's somewhat ironic that I didn't use regular expressions to parse regular expressions, but you can read about the thought behind it all here...
http://blog.regexhero.net/2010/03/code-hinting-for-regular-expressions.html
what kind of regexp would better be rewritten specifically for a given problem via string manipulation?
Easy.
Determine whether you ever need to rewrite anything at all.
(A positive answer would apply to roughly 1 in 10,000 scripts: massive text parsing, resource-critical code.)
Do profile the possible solutions.
Use the one that suits the given problem.
As for the other 9,999 cases, do not waste your time on such a trifling problem; use whichever you like more.
Every time you ask yourself such a question, it is extremely useful to remind yourself that by default all your extra-optimized and super-fast code gets parsed char by char on every user request. No brain-cracking regexps, no devious string manipulation - just good old picking characters one by one.
Regexes aren't slow. But an implementation can be slow, mostly because a regex is often interpreted and rebuilt each time it is used. But a good regexp library allows you to use compiled versions. They are pretty fast.
