Reading the Java Code Conventions document from 1997, I saw this in an example on P16 about variable naming conventions:
int i;
char *cp;
float myWidth;
The second declaration is of interest - to me it looks a lot like how you might declare a pointer in C. It gives a syntax error when compiling under Java 8.
Just out of curiosity: was this ever valid syntax? If so, what did it mean?
It's a copy-paste error, I suppose.
From JLS 1 (which is really not that easy to find!), the section on local variable declarations states that such a declaration, in essence, is a type followed by an identifier. Note that there is no special reference made about *, but there is special reference made about [] (for arrays).
char is our type, so the only possibility that remains is that *cp is an identifier. The section on Identifiers states
An identifier is an unlimited-length sequence of Java letters and Java
digits, the first of which must be a Java letter.
...
A Java letter is a character for which the method Character.isJavaLetter (§20.5.17) returns true
And the JavaDoc for that method states:
A character is considered to be a Java letter if and only if it is a
letter (§20.5.15) or is the dollar sign character '$' (\u0024) or the
underscore ("low line") character '_' (\u005F).
so foo, _foo and $foo were fine, but *foo was never valid.
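To make the rule concrete, here is how those forms fare under any Java compiler (my own quick sketch, not from the Conventions document):

int foo;     // valid
int _foo;    // valid: '_' is a Java letter
int $foo;    // valid: '$' is a Java letter
// int *foo; // syntax error: '*' is not a Java letter, so *foo is not an identifier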
If you want a more up-to-date Java style guide, Google's style guide is arguably the most commonly referenced.
It appears that this is a generic coding style document for C-like languages with some Java-specific additions. See, for example, also the next page:
Do not use the assignment operator in a place where it can be easily confused with the equality operator. Example:
if (c++ = d++) { // AVOID! Java disallows.
…
}
It does not make sense to tell a programmer to avoid something that is a syntax error anyway, so the only conclusion we can draw from this is that the document is not 100% Java-specific.
Another possibility is that it was meant as a coding style for the entire Java system, including the C++ parts of the JRE and JDK.
Note that Sun abandoned the coding style document long before Oracle came into the picture. They restricted themselves to specifying what the language is, not how to use it.
Invalid syntax!
It's just a copy/paste mistake.
The * token in a variable declaration is applicable only in C, which uses pointers, whereas Java never uses pointers.
In Java, the * token is used only as an operator.
There is no day on SO that passes without a question about parsing (X)HTML or XML with regular expressions being asked.
While it's relatively easy to come up with examples that demonstrate the non-viability of regexes for this task, or with a collection of expressions to represent the concept, I could still not find on SO a formal explanation, done in layman's terms, of why this is not possible.
The only formal explanations I could find so far on this site are probably extremely accurate, but also quite cryptic to the self-taught programmer:
the flaw here is that HTML is a Chomsky Type 2 grammar (context free
grammar) and RegEx is a Chomsky Type 3 grammar (regular expression)
or:
Regular expressions can only match regular languages but HTML is a
context-free language.
or:
A finite automaton (which is the data structure underlying a regular
expression) does not have memory apart from the state it's in, and if
you have arbitrarily deep nesting, you need an arbitrarily large
automaton, which collides with the notion of a finite automaton.
or:
The Pumping lemma for regular languages is the reason why you can't do
that.
[To be fair: the majority of the above explanations link to Wikipedia pages, but these are not much easier to understand than the answers themselves.]
So my question is: could somebody please provide a translation in layman's terms of the formal explanations given above of why it is not possible to use regex for parsing (X)HTML/XML?
EDIT: After reading the first answer I thought that I should clarify: I am looking for a "translation" that also briefly explains the concepts it tries to translate: at the end of an answer, the reader should have a rough idea - for example - of what "regular language" and "context-free grammar" mean...
Concentrate on this one:
A finite automaton (which is the data structure underlying a regular
expression) does not have memory apart from the state it's in, and if
you have arbitrarily deep nesting, you need an arbitrarily large
automaton, which collides with the notion of a finite automaton.
The definition of regular expressions is equivalent to the fact that a test of whether a string matches the pattern can be performed by a finite automaton (one different automaton for each pattern). A finite automaton has no memory - no stack, no heap, no infinite tape to scribble on. All it has is a finite number of internal states, each of which can read a unit of input from the string being tested, and use that to decide which state to move to next. As special cases, it has two termination states: "yes, that matched", and "no, that didn't match".
HTML, on the other hand, has structures that can nest arbitrarily deep. To determine whether a file is valid HTML or not, you need to check that all the closing tags match a previous opening tag. To understand it, you need to know which element is being closed. Without any means to "remember" what opening tags you've seen, no chance.
Note however that most "regex" libraries actually permit more than just the strict definition of regular expressions. If they can match back-references, then they've gone beyond a regular language. So the reason why you shouldn't use a regex library on HTML is a little more complex than the simple fact that HTML is not regular.
The fact that HTML doesn't represent a regular language is a red herring. Regular expressions and regular languages sound similar, but are not the same thing - they share a common origin, but there is a notable distance between the academic notion of a "regular language" and the current matching power of engines. In fact, almost all modern regular expression engines support non-regular features - a simple example is (.*)\1, which uses backreferencing to match a repeated sequence of characters - for example 123123, or bonbon. Matching of recursive/balanced structures makes these even more fun.
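To see a non-regular feature in action, here is a minimal Java sketch of that backreference example (the class name is mine; it uses only java.util.regex):

import java.util.regex.Pattern;

public class Backref {
    public static void main(String[] args) {
        // (.*)\1 matches any sequence immediately followed by itself --
        // something no strictly regular expression can express.
        Pattern p = Pattern.compile("(.*)\\1");
        System.out.println(p.matcher("123123").matches()); // true
        System.out.println(p.matcher("bonbon").matches()); // true
        System.out.println(p.matcher("bonobo").matches()); // false
    }
}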
Wikipedia puts this nicely, in a quote by Larry Wall:
'Regular expressions' [...] are only marginally related to real regular expressions. Nevertheless, the term has grown with the capabilities of our pattern matching engines, so I'm not going to try to fight linguistic necessity here. I will, however, generally call them "regexes" (or "regexen", when I'm in an Anglo-Saxon mood).
"Regular expression can only match regular languages", as you can see, is nothing more than a commonly stated fallacy.
So, why not then?
A good reason not to match HTML with regular expressions is that "just because you can doesn't mean you should". While it may be possible, there are simply better tools for the job. Consider:
Valid HTML is harder/more complex than you may think.
There are many types of "valid" HTML - what is valid in HTML, for example, isn't valid in XHTML.
Much of the free-form HTML found on the internet is not valid anyway. HTML libraries do a good job of dealing with these cases as well, and have been tested against many of the common ones.
Very often it is impossible to match a part of the data without parsing it as a whole. For example, you might be looking for all titles, and end up matching inside a comment or a string literal. <h1>.*?</h1> may be a bold attempt at finding the main title, but it might find:
<!-- <h1>not the title!</h1> -->
Or even:
<script>
var s = "Certainly <h1>not the title!</h1>";
</script>
The last point is the most important:
Using a dedicated HTML parser is better than any regex you can come up with. Very often, XPath allows a better expressive way of finding the data you need, and using an HTML parser is much easier than most people realize.
A good summary of the subject, and an important comment on when mixing Regex and HTML may be appropriate, can be found in Jeff Atwood's blog: Parsing Html The Cthulhu Way.
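To make "parser plus XPath" concrete, here is a hedged sketch using only the JDK's built-in DOM and XPath (class name and sample markup are mine; it assumes well-formed XML/XHTML):

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class TitleFinder {
    public static void main(String[] args) throws Exception {
        String xhtml = "<html><body><h1>Real title</h1>"
                     + "<!-- <h1>not the title!</h1> --></body></html>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xhtml)));
        // The commented-out h1 is never matched: the parser already
        // knows which text is markup and which is not.
        NodeList h1s = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("//h1", doc, XPathConstants.NODESET);
        for (int i = 0; i < h1s.getLength(); i++) {
            System.out.println(h1s.item(i).getTextContent()); // prints "Real title" only
        }
    }
}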
When is it better to use a regular expression to parse HTML?
In most cases, it is better to use XPath on the DOM structure a library can give you. Still, contrary to popular opinion, there are a few cases when I would strongly recommend using a regex and not a parser library - given a few of these conditions:
When you need a one-time update of your HTML files, and you know the structure is consistent.
When you have a very small snippet of HTML.
When you aren't dealing with an HTML file, but a similar templating engine (it can be very hard to find a parser in that case).
When you want to change parts of the HTML, but not all of it - a parser, to my knowledge, cannot answer this request: it will parse the whole document, and save a whole document, changing parts you never wanted to change.
Because HTML can have unlimited nesting of <tags><inside><tags and="<things><that><look></like></tags>"></inside></each></other> and regex can't really cope with that because it can't track a history of what it's descended into and come out of.
A simple construct that illustrates the difficulty:
<body><div id="foo">Hi there! <div id="bar">Bye!</div></div></body>
99.9% of generalized regex-based extraction routines will be unable to correctly give me everything inside the div with the ID foo, because they can't tell the closing tag for that div from the closing tag for the bar div. That is because they have no way of saying "okay, I've now descended into the second of two divs, so the next div close I see brings me back out one, and the one after that is the close tag for the first". Programmers typically respond by devising special-case regexes for the specific situation, which then break as soon as more tags are introduced inside foo and have to be unsnarled at tremendous cost in time and frustration. This is why people get mad about the whole thing.
A regular language is a language that can be matched by a finite state machine.
(Understanding Finite State machines, Push-down machines, and Turing machines is basically the curriculum of a fourth year college CS Course.)
Consider the following machine, which recognizes the string "hi".
(Start) --Read h--> (A) --Read i--> (Succeed)
    \                \
     \                -- read any other value --> (Fail)
      -- read any other value --> (Fail)
This is a simple machine to recognize a regular language; each expression in parentheses is a state, and each arrow is a transition. Building a machine like this will allow you to test any input string against a regular language - hence, a regular expression.
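Here is that same "hi" machine written out as Java code (my own illustration): each case label is a state, each assignment a transition, and the only memory is the current state.

public class HiMachine {
    static boolean matchesHi(String input) {
        int state = 0; // 0 = Start, 1 = A, 2 = Succeed, -1 = Fail
        for (char c : input.toCharArray()) {
            switch (state) {
                case 0:  state = (c == 'h') ? 1 : -1; break;
                case 1:  state = (c == 'i') ? 2 : -1; break;
                default: state = -1; break; // Fail, or any input after "hi"
            }
        }
        return state == 2;
    }

    public static void main(String[] args) {
        System.out.println(matchesHi("hi"));  // true
        System.out.println(matchesHi("ho"));  // false
        System.out.println(matchesHi("hii")); // false
    }
}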
HTML requires you to know more than just what state you are in - it requires a history of what you have seen before, to match tag nesting. You can accomplish this if you add a stack to the machine, but then it is no longer "regular". This is called a push-down machine, and it recognizes a context-free grammar.
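Here is a minimal sketch of where that stack goes, in Java (my own illustration, deliberately naive: it ignores comments, CDATA, and '>' inside attribute values; it exists only to show the stack):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NestingChecker {
    private static final Pattern TAG =
            Pattern.compile("<(/?)([a-zA-Z][\\w-]*)[^>]*?(/?)>");

    static boolean isBalanced(String html) {
        Deque<String> open = new ArrayDeque<>(); // the unbounded memory an FSM lacks
        Matcher m = TAG.matcher(html);
        while (m.find()) {
            if (!m.group(3).isEmpty()) continue;      // self-closing: <br/>
            if (m.group(1).isEmpty()) {
                open.push(m.group(2));                // opening tag: remember it
            } else if (open.isEmpty() || !open.pop().equals(m.group(2))) {
                return false;                         // close tag with no matching open
            }
        }
        return open.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("<a><b></b></a>")); // true
        System.out.println(isBalanced("<a><b></a></b>")); // false
    }
}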
A regular expression is a machine with a finite (and typically rather small) number of discrete states.
To parse XML, C, or any other language with arbitrary nesting of language elements, you need to remember how deep you are. That is, you must be able to count braces/brackets/tags.
You cannot count with finite memory. There may be more brace levels than you have states! You might be able to parse a subset of your language that restricts the number of nesting levels, but it would be very tedious.
A grammar is a formal definition of where words can go. For example, adjectives precede nouns in English grammar, but follow nouns en la gramática española.
Context-free means that the grammar works universally in all contexts. Context-sensitive means there are additional rules in certain contexts.
In C#, for example, using means something different in using System; at the top of files, than using (var sw = new StringWriter (...)). A more relevant example is the following code within code:
void Start ()
{
    string myCode = @"
void Start()
{
    Console.WriteLine (""x"");
}
";
}
There's another practical reason for not using regular expressions to parse XML and HTML that has nothing to do with the computer science theory at all: your regular expression will either be hideously complicated, or it will be wrong.
For example, it's all very well writing a regular expression to match
<price>10.65</price>
But if your code is to be correct, then:
It must allow whitespace after the element name in both start and end tag
If the document is in a namespace, then it should allow any namespace prefix to be used
It should probably allow and ignore any unknown attributes appearing in the start tag (depending on the semantics of the particular vocabulary)
It may need to allow whitespace before and after the decimal value (again, depending on the detailed rules of the particular XML vocabulary).
It should not match something that looks like an element, but is actually in a comment or CDATA section (this becomes especially important if there is a possibility of malicious data trying to fool your parser).
It may need to provide diagnostics if the input is invalid.
Of course some of this depends on the quality standards you are applying. We see a lot of problems on StackOverflow with people having to generate XML in a particular way (for example, with no whitespace in the tags) because it is being read by an application that requires it to be written in a particular way. If your code has any kind of longevity then it's important that it should be able to process incoming XML written in any way that the XML standard permits, and not just the one sample input document that you are testing your code on.
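To make the point concrete, here is a sketch of how the "simple" pattern grows once just the first two requirements are honoured (whitespace inside the tags and an optional namespace prefix); comments, CDATA sections, and diagnostics are still completely unhandled:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PriceRegex {
    public static void main(String[] args) {
        // Already hard to read, and it covers only two of the six requirements above.
        Pattern price = Pattern.compile(
            "<(?:\\w+:)?price\\s*>\\s*(\\d+(?:\\.\\d+)?)\\s*</(?:\\w+:)?price\\s*>");
        Matcher m = price.matcher("<ns1:price >\n  10.65\n</ns1:price>");
        if (m.find()) {
            System.out.println(m.group(1)); // 10.65
        }
    }
}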
So others have gone and given brief definitions for most of these things, but I don't really think they cover WHY normal regexes are what they are.
There are some great resources on what a finite state machine is, but in short, a seminal paper in computer science proved that the basic grammar of regexes (the standard ones, used by grep, not the extended ones, like PCRE) can always be manipulated into a finite-state machine, meaning a 'machine' where you are always in a box, and have a limited number of ways to move to the next box. In short, you can always tell what the next 'thing' you need to do is just by looking at the current character. (And yes, even when it comes to things like 'match at least 4, but no more than 5 times', you can still create a machine like this.) (I should note that the machine I describe here is technically only a subtype of finite-state machines, but it can implement any other subtype, so...)
This is great because you can always very efficiently evaluate such a machine, even for large inputs. Studying these sorts of questions (how does my algorithm behave when the number of things I feed it gets big) is called studying the computational complexity of the technique. If you're familiar with how a lot of calculus deals with how functions behave as they approach infinity, well, that's pretty much it.
So what's so great about a standard regular expression? Well, any given regex can match a string of length N in no more than O(N) time (meaning that doubling the length of your input doubles the time it takes; it says nothing about the speed for a given input). (Of course, some are faster: the regex * could match in O(1), meaning constant, time.) The reason is simple: remember, because the system has only a few paths from each state, you never 'go back', and you only need to check each character once. That means even if I pass you a 100 gigabyte file, you'll still be able to crunch through it pretty quickly: which is great!
Now, it's pretty clear why you can't use such a machine to parse arbitrary XML: you can have infinite tags-in-tags, and to parse correctly you need an infinite number of states. But, if you allow recursive replaces, a PCRE is Turing complete: so it could totally parse HTML! Even if you don't, a PCRE can parse any context-free grammar, including XML. So the answer is "yeah, you can". Now, it might take exponential time (you can't use our neat finite-state machine, so you need to use a big fancy parser that can rewind, which means that a crafted expression will take centuries on a big file), but still. Possible.
But let's talk real quick about why that's an awful idea. First of all, while you'll see a ton of people saying "omg, regexes are so powerful", the reality is... they aren't. What they are is simple. The language is dead simple: you only need to know a few meta-characters and their meanings, and you can understand (eventually) anything written in it. However, the issue is that those meta-characters are all you have. See, they can do a lot, but they're meant to express fairly simple things concisely, not to try and describe a complicated process.
And XML sure is complicated. It's pretty easy to find examples in some of the other answers: you can't match stuff inside comment fields, etc. Representing all of that in a programming language takes work: and that's with the benefits of variables and functions! PCREs, for all their features, can't come close to that. Any hand-made implementation will be buggy: scanning blobs of meta-characters to check matching parentheses is hard, and it's not like you can comment your code. It'd be easier to define a meta-language, and compile that down to a regex: and at that point, you might as well just take the language you wrote your meta-compiler with and write an XML parser. It'd be easier for you, faster to run, and just better overall.
For more neat info on this, check out this site. It does a great job of explaining all this stuff in layman's terms.
Don't parse XML/HTML with regex; use a proper XML/HTML parser and a powerful XPath query.
theory:
According to compiler theory, XML/HTML can't be parsed using regexes based on finite state machines. Due to the hierarchical construction of XML/HTML, you need to use a pushdown automaton and manipulate a LALR grammar using a tool like YACC.
realLife©®™ everyday tools in a shell:
You can use one of the following:
xmllint often installed by default with libxml2, xpath1 (check my wrapper to get newline-delimited output)
xmlstarlet can edit, select, transform... Not installed by default, xpath1
xpath installed via perl's module XML::XPath, xpath1
xidel xpath3
saxon-lint my own project, wrapper over Michael Kay's Saxon-HE Java library, xpath3
Or you can use high-level languages and proper libs; I think of:
python's lxml (from lxml import etree)
perl's XML::LibXML, XML::XPath, XML::Twig::XPath, HTML::TreeBuilder::XPath
ruby nokogiri, check this example
php DOMXpath, check this example
Check: Using regular expressions with HTML tags
In a purely theoretical sense, it is impossible for regular expressions to parse XML. They are defined in a way that allows them no memory of any previous state, thus preventing the correct matching of an arbitrary tag, and they cannot penetrate to an arbitrary depth of nesting, since the nesting would need to be built into the regular expression.
Modern regex parsers, however, are built for their utility to the developer, rather than their adherence to a precise definition. As such, we have things like back-references and recursion that make use of knowledge of previous states. Using these, it is remarkably simple to create a regex that can explore, validate, or parse XML.
Consider for example,
(?:
  <!\-\-[\S\s]*?\-\->
  |
  <([\w\-\.]+)[^>]*?
  (?:
    \/>
    |
    >
    (?:
      [^<]
      |
      (?R)
    )*
    <\/\1>
  )
)
This will find the next properly formed XML tag or comment, and it will only find it if its entire contents are properly formed. (This expression has been tested using Notepad++, which uses Boost C++'s regex library, which closely approximates PCRE.)
Here's how it works:
The first chunk matches a comment. It's necessary for this to come first so that it will deal with any commented-out code that otherwise might cause hang ups.
If that doesn't match, it will look for the beginning of a tag. Note that it uses parentheses to capture the name.
This tag will either end in a />, thus completing the tag, or it will end with a >, in which case it will continue by examining the tag's contents.
It will continue parsing until it reaches a <, at which point it will recurse back to the beginning of the expression, allowing it to deal with either a comment or a new tag.
It will continue through the loop until it arrives at either the end of the text or at a < that it cannot parse. Failing to match will, of course, cause it to start the process over. Otherwise, the < is presumably the beginning of the closing tag for this iteration. Using the back-reference inside a closing tag <\/\1>, it will match the opening tag for the current iteration (depth). There's only one capturing group, so this match is a simple matter. This makes it independent of the names of the tags used, although you could modify the capturing group to capture only specific tags, if you need to.
At this point it will either kick out of the current recursion, up to the next level or end with a match.
This example solves problems dealing with whitespace or identifying relevant content through the use of character groups that merely negate < or >, or, in the case of comments, by using [\S\s], which will match anything, including carriage returns and new lines, even in single-line mode, continuing until it reaches a -->. Hence, it simply treats everything as valid until it reaches something meaningful.
For most purposes, a regex like this isn't particularly useful. It will validate that XML is properly formed, but that's all it will really do, and it doesn't account for properties (although this would be an easy addition). It's only this simple because it leaves out real world issues like this, as well as definitions of tag names. Fitting it for real use would make it much more of a beast. In general, a true XML parser would be far superior. This one is probably best suited for teaching how recursion works.
Long story short: use an XML parser for real work, and use this if you want to play around with regexes.
I'm new to Java. As a .Net developer, I'm very much used to the Regex class in .Net. The Java implementation of Regex (Regular Expressions) is not bad but it's missing some key features.
I wanted to create my own helper class for Java but I thought maybe there is already one available. So is there any free and easy-to-use product available for Regex in Java or should I create one myself?
If I would write my own class, where do you think I should share it for the others to use it?
[Edit]
There were complaints that I wasn't addressing the problem with the current Regex class. I'll try to clarify my question.
In .Net the usage of a regular expression is easier than in Java. Since both languages are object oriented and very similar in many aspects, I expect to have a similar experience with using regex in both languages. Unfortunately that's not the case.
Here's a little code compared in Java and C#. The first is C# and the second is Java:
In C#:
string source = "The colour of my bag matches the color of my shirt!";
string pattern = "colou?r";
foreach(Match match in Regex.Matches(source, pattern))
{
Console.WriteLine(match.Value);
}
In Java:
String source = "The colour of my bag matches the color of my shirt!";
String pattern = "colou?r";
Pattern p = Pattern.compile(pattern);
Matcher m = p.matcher(source);
while(m.find())
{
System.out.println(source.substring(m.start(), m.end()));
}
I tried to be fair to both languages in the sample code above.
The first thing you notice here is the .Value member of the Match class (compared to using .start() and .end() in Java).
Why should I create two objects when I can call a static function like Regex.Matches or Regex.Match, etc.?
In more advanced usages, the difference shows itself much more. Look at Groups, Captures, Index, Length, Success, etc. These are all very necessary features that in my opinion should be available for Java too.
Of course all of these features can be manually added by a custom proxy (helper) class. This is the main reason why I asked this question. We don't have the breeze of Regex in Perl, but at least we can use the .Net approach to Regex, which I think is very cleverly designed.
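For what it's worth, a minimal sketch of such a helper (the class and method names are hypothetical) shows that the static-method convenience itself is easy to recover; the deeper limitations discussed in the answer below are not:

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class Regexes {
    private Regexes() {}

    // In the spirit of .NET's Regex.Matches; m.group() is Java's
    // analogue of .NET's match.Value.
    public static List<String> findAll(String input, String pattern) {
        List<String> result = new ArrayList<>();
        Matcher m = Pattern.compile(pattern).matcher(input);
        while (m.find()) {
            result.add(m.group());
        }
        return result;
    }
}

With it, the earlier example collapses to: for (String s : Regexes.findAll(source, "colou?r")) System.out.println(s);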
From your edited example, I can now see what you would like. And you have my sympathies in this, too. Java’s regexes are a long, long, long ways from the convenience you find in Ruby or Perl. And they pretty much always will be; this cannot be fixed, so we’re stuck with this mess forever — at least in Java. Other JVM languages do a better job at this, especially Groovy. But they still suffer some of the inherent flaws, and can only go so far.
Where to begin? There are the so-called convenience methods of the String class: matches, replaceAll, replaceFirst, and split. These can sometimes be ok in small programs, depending how you use them. However, they do indeed have several problems, which it appears you have discovered. Here’s a partial list of those problems, and what can and cannot be done about them.
The inconvenience method is very bizarrely named “matches” but it requires you to pad your regex on both sides to match the entire string. This counter-intuitive sense is contrary to any sense of the word match as used in any previous language, and constantly bites people. Patterns passed into the other 3 inconvenience methods work very unlike this one, because in the other 3, they work like normal patterns work everywhere else; just not in matches. This means you can’t just copy your patterns around, even within methods in the same darned class for goodness’ sake! And there is no find convenience method to do what every other matcher in the world does. The matches method should have been called something like FullMatch, and there should have been a PartialMatch or find method added to the String class.
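The trap is easy to demonstrate (a quick sketch of my own):

import java.util.regex.Pattern;

public class MatchesVsFind {
    public static void main(String[] args) {
        String s = "The colour of my bag";
        // matches() silently anchors at both ends -- the "full match" trap:
        System.out.println(s.matches("colou?r")); // false
        // a real partial match needs the Pattern/Matcher dance:
        System.out.println(Pattern.compile("colou?r").matcher(s).find()); // true
    }
}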
There is no API that allows you to pass in Pattern.compile flags along with the strings you use for the 4 pattern-related convenience methods of the String class. That means you have to rely on string versions like (?i) and (?x), but those do not exist for all possible Pattern compilation flags. This is highly inconvenient to say the least.
The split method does not return the same result in edge cases as split returns in the languages that Java borrowed split from. This is a sneaky little gotcha. How many elements do you think you should get back in the return list if you split the empty string, eh? Java manufactures a fake return element where there should be one, which means you can't distinguish between legit results and bogus ones. It is a serious design flaw that, splitting on a ":", you cannot tell the difference between inputs of "" vs of ":". Aw, gee! Don't people ever test this stuff? And again, the broken and fundamentally unreliable behavior is unfixable: you must never change things, even broken things. It's not ok to break broken things in Java the way it is anywhere else. Broken is forever here.
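The edge cases are easy to verify for yourself (a sketch; comments show the stock JDK results):

public class SplitGotchas {
    public static void main(String[] args) {
        System.out.println("".split(":").length);       // 1 -- empty input yields one empty element
        System.out.println(":".split(":").length);      // 0 -- a lone delimiter yields nothing at all
        System.out.println("a:b:::".split(":").length); // 2 -- trailing empty fields silently dropped
    }
}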
The backslash notation of regexes conflicts with the backslash notation used in strings. This makes it superduper awkward, and error-prone, too, because you have to constantly add lots of backslashes to everything, and it’s too easy to forget one and get neither warning nor success. Simple patterns like \b\w+\b become nightmares in typographical excess: "\\b\\w+\\b". Good luck with reading that. Some people use a slash-inverter function on their patterns so that they can write that as "/b/w+/b" instead. Other than reading in your patterns from a string, there is no way to construct your pattern in a WYSIWYG literal fashion; it’s always heavy-laden with backslashes. Did you get them all, and enough, and in the right places? If so, it makes it really really hard to read. If it isn’t, you probably haven’t gotten them all. At least JVM languages like Groovy have figured out the right answer here: give people 1st-class regexes so you don’t go nuts. Here’s a fair collection of Groovy regex examples showing how simple it can and should be.
The (?x) mode is deeply flawed. It doesn’t take comments in the Java style of // COMMENT but rather in the shell style of # COMMENT. It doesn’t work with multiline strings. It doesn’t accept literals as literals, forcing the backslash problems listed above, which fundamentally compromises any attempt at lining things up, like having all comments begin on the same column. Because of the backslashes, you either make them begin on the same column in the source code string and screw them up if you print them out, or vice versa. So much for legibility!
It is incredibly difficult — and indeed, fundamentally unfixably broken — to enter Unicode characters in a regex. There is no support for symbolically named characters like \N{QUOTATION MARK}, \N{LATIN SMALL LETTER E WITH GRAVE}, or \N{MATHEMATICAL BOLD CAPITAL C}. That means you’re stuck with unmaintainable magic numbers. And you cannot even enter them by code point, either. You cannot use \u0022 for the first one because the Java preprocessor makes that a syntax error. So then you move to \\u0022 instead, which works until you get to the next one, \\u00E8, which cannot be entered that way or it will break the CANON_EQ flag. And the last one is a pure nightmare: its code point is U+1D402, but Java does not support the full Unicode set using their code point numbers in regexes, forcing you to get out your calculator to figure out that that is \uD835\uDC02 or \\uD835\\uDC02 (but not \\uD835\uDC02), madly enough. But you cannot use those in character classes due to a design bug, making it impossible to match say, [\N{MATHEMATICAL BOLD CAPITAL A}-\N{MATHEMATICAL BOLD CAPITAL Z}] because the regex compiler screws up on the UTF-16. Again, this can never be fixed or it will change old programs. You cannot even get around the bug by using the normal workaround to Java’s Unicode-in-source-code troubles by compiling with java -encoding UTF-8, because the stupid thing stores the strings as nasty UTF-16, which necessarily breaks them in character classes. OOPS!
Many of the regex things we’ve come to rely on in other languages are missing from Java. There are no named groups, for example, nor even relatively-numbered ones. This makes constructing larger patterns out of smaller ones fundamentally error prone. There is a front-end library that allows you to have simple named groups, and indeed this will finally arrive in production JDK7. But even so there is no mechanism for what to do with more than one group by the same name. And you still don’t have relatively numbered buffers, either. We’re back to the Bad Old Days again, stuff that was solved aeons ago.
There is no support for a linebreak sequence, which is one of the only two “Strongly Recommended” parts of the standard, which suggests that \R be used for such. This is awkward to emulate because of its variable-length nature and Java’s lack of support for graphemes.
The character class escapes do not work on Java’s native character set! Yes, that’s right: routine stuff like \w and \s (or rather, "\\w" and "\\s") does not work on Unicode in Java! This is not the cool sort of retro. To make matters worse, Java’s \b (make that "\\b", which isn’t the same as "\b") does have some Unicode sensibility, although not what the standard says it must have. So for example a string like "élève" will never in Java match the pattern \b\w+\b, and not merely in entirety per Pattern.matches, but indeed at no point whatsoever as you might get from Matcher.find. This is just so screwed up as to beggar belief. They’ve broken the inherent connection between \w and \b, then misdefined them to boot!! It doesn’t even know what Unicode Alphabetic code points are. This is supremely broken, and they can never fix it because that would change the behavior of existing code, which is strictly forbidden in the Java Universe. The best you can do is create a rewrite library that acts as a front end before it gets to the compile phase; that way you can forcibly migrate your patterns from the 1960s into the 21st century of text processing.
The only two Unicode properties supported are the General Categories and the Block properties. The general category properties only support the abbreviations like \p{Sk}, contrary to the standard’s Strong Recommendation to also allow \p{Modifier Symbol}, \p{Modifier_Symbol}, etc. You don’t even get the required aliases the standard says you should. That makes your code even more unreadable and unmaintainable. You will finally get support for the Script property in production JDK7, but that is still seriously short of the minimum set of 11 essential properties that the Standard says you must provide for even the minimal level of Unicode support.
Some of the meagre properties that Java does provide are faux amis: they have the same names as official Unicode property names, but they do something altogether different. For example, Unicode requires that \p{alpha} be the same as \p{Alphabetic}, but Java makes it the archaic and no-longer-quaint 7-bit alphabetics only, which is more than 4 orders of magnitude too few. Whitespace is another flaw: if you use the Java version that masquerades as Unicode whitespace, your UTF-8 parsers will break because of NO-BREAK SPACE code points, which Unicode normatively requires be deemed whitespace, but Java ignores that requirement, so it breaks your parser.
There is no support for graphemes, the way \X normally provides. That renders impossible innumerably many common tasks that you need and want to do with regexes. Not only are extended grapheme clusters out of your reach; because Java supports almost none of the Unicode properties, you cannot even approximate the old legacy grapheme clusters using the standard (?:\p{Grapheme_Base}\p{Grapheme_Extend}*). Not being able to work with graphemes makes even the simplest sorts of Unicode text processing impossible. For example, you cannot match a vowel irrespective of diacritic in Java. The way you do this in a language with grapheme support varies, but at the very least you should be able to throw the thing into NFD and match (?:(?=[aeiou])\X). In Java, you cannot do even that much: graphemes are beyond your reach. And that means Java cannot even handle its own native character set. It gives you Unicode and then makes it impossible to work with it.
The convenience methods in the String class do not cache the compiled regex. In fact, there is no such thing as a compile-time pattern that gets syntax-checked at compile time — which is when syntax checking is supposed to occur. That means your program, which uses nothing but constant regexes fully understood at compile time, will bomb out with an exception in the middle of its run if you forget a little backslash here or there as one is wont to do due to the flaws previously discussed. Even Groovy gets this part right. Regexes are far too high-level a construct to be dealt with by Java’s unpleasant after-the-fact, bolted-on-the-side model — and they are far too important to routine text processing to be ignored. Java is much too low-level a language for this stuff, and it fails to provide the simple mechanics out of which you might yourself build what you need: you can’t get there from here.
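The standard mitigation (not a fix) is to hoist compilation into a constant, so the pattern is parsed exactly once and a malformed pattern at least blows up at class initialization instead of mid-run; a sketch:

import java.util.regex.Pattern;

final class Patterns {
    // Compiled once; every caller reuses the same immutable, thread-safe Pattern.
    static final Pattern WORD = Pattern.compile("\\w+");

    private Patterns() {}
}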
The String and Pattern classes are marked final in Java. That completely kills any possibility of using proper OO design to extend those classes. You can’t create a better version of a matches method by subclassing and replacement. Heck, you can’t even subclass! Final is not a solution; final is a death sentence from which there is no appeal.
Finally, to show you just how brain-damaged Java’s regexes truly are, consider this multiline pattern, which shows many of the flaws already described:
String rx =
"(?= ^ \\p{Lu} [_\\pL\\pM\\d\\-] + \$)\n"
+ " # next is a big can't-have set \n"
+ "(?! ^ .* \n"
+ " (?: ^ \\d+ $ \n"
+ " | ^ \\p{Lu} - \\p{Lu} $ \n"
+ " | Invitrogen \n"
+ " | Clontech \n"
+ " | L-L-X-X # dashes ok \n"
+ " | Sarstedt \n"
+ " | Roche \n"
+ " | Beckman \n"
+ " | Bayer \n"
+ " ) # end alternatives \n"
+ " \\b # only on a word boundary \n"
+ ") # end negated lookahead \n"
;
Do you see how unnatural that is? You have to put literal newlines in your strings; you have to use non-Java comments; you cannot make anything line up because of the extra backslashes; you have to use definitions of things that don’t work right on Unicode. There are many more problems beyond that.
Not only are there no plans to fix almost any of these grievous flaws, it is indeed impossible to fix almost any of them at all, because you would change old programs. Even the normal tools of OO design are forbidden to you because it’s all locked down with the finality of a death sentence, and it cannot be fixed.
So Alireza Noori, if you feel Java’s clumsy regexes are too hosed for reliable and convenient regex processing ever to be possible in Java, I cannot gainsay you. Sorry, but that’s just the way it is.
“Fixed in the Next Release!”
Just because some things can never be fixed does not mean that nothing can ever be fixed. It just has to be done very carefully. Here are the things I know of which are already fixed in current JDK7 or proposed JDK8 builds:
The Unicode Script property is now supported. You may use any of the equivalent forms \p{Script=Greek}, \p{sc=Greek}, \p{IsGreek}, or \p{Greek}. This is inherently superior to the old clunky block properties. It means you can do things like [\p{Latin}\p{Common}\p{Inherited}], which is quite important.
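For example, assuming a JDK7+ runtime, something like this matches a run of Greek text while letting Common and Inherited characters tag along:
import java.util.regex.Pattern;

public class GreekRun {
    public static void main(String[] args) {
        // JDK 7+: script properties, here via the \p{IsScriptName} form.
        Pattern greek = Pattern.compile("[\\p{IsGreek}\\p{IsCommon}\\p{IsInherited}]+");
        System.out.println(greek.matcher("αβγ δε").matches()); // true: the space is Common
    }
}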
The UTF-16 bug has a workaround. You may now specify any Unicode code point by its number using the \x{⋯} notation, such as \x{1D402}. This works even inside character classes, finally allowing [\x{1D400}-\x{1D419}] to work properly. You still must double the backslash, though, and it works only in regexes, not in strings in general, as it really ought to.
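For example, under JDK7+ the mathematical bold capitals finally become a sane range:
import java.util.regex.Pattern;

public class AstralRange {
    public static void main(String[] args) {
        // JDK 7+: \x{...} works inside character classes, even for astral code points.
        Pattern boldCaps = Pattern.compile("[\\x{1D400}-\\x{1D419}]+");
        String s = new StringBuilder().appendCodePoint(0x1D402).toString(); // U+1D402
        System.out.println(boldCaps.matcher(s).matches()); // true
    }
}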
Named groups are now supported via the standard notation (?<NAME>⋯) to create one and \k<NAME> to backreference it. Named groups still contribute to the numeric group numbers, too. However, you cannot use the same name for more than one group in the same pattern, nor can you use named groups for recursion.
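A quick sketch of the JDK7+ syntax:
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NamedGroups {
    public static void main(String[] args) {
        // (?<name>...) captures; \k<name> backreferences it.
        Matcher m = Pattern.compile("(?<word>\\w+) \\k<word>").matcher("hello hello");
        if (m.matches()) {
            System.out.println(m.group("word")); // hello
            System.out.println(m.group(1));      // hello: still numbered as group 1
        }
    }
}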
A new Pattern compile flag, Pattern.UNICODE_CHARACTER_CLASSES and associated embeddable switch, (?U), will now swap around all the definitions of things like \w, \b, \p{alpha}, and \p{punct}, so that they now conform to the definitions of those things required by The Unicode Standard.
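For instance, assuming JDK7+, the difference is easy to demonstrate:
public class UnicodeWord {
    public static void main(String[] args) {
        String s = "naïve";
        System.out.println(s.matches("\\w+"));     // false: default \w is ASCII-only
        System.out.println(s.matches("(?U)\\w+")); // true: (?U) gives the Unicode definition
    }
}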
The missing or misdefined binary properties \p{IsLowercase}, \p{IsUppercase}, and \p{IsAlphabetic} will now be supported, and these correspond to methods in the Character class. This is important because Unicode makes a significant and pervasive distinction between mere letters and cased or alphabetic code points. These key properties are among the 11 essential properties that are absolutely required for Level 1 compliance with UTS#18, “Unicode Regular Expressions”, without which you really cannot work with Unicode.
These enhancements and fixes are very important to finally have, and so I am glad, even excited, to have them.
But for industrial-strength, state-of-the-art regex and/or Unicode work, I will not be using Java. There’s just too much missing from Java’s still-patchy-after-20-years Unicode model to get real work done if you dare to use the character set that Java gives you. And a bolted-on-the-side model, which is all Java regexes are, never works. You have to start over from first principles, the way Groovy did.
Sure, it might work for very limited applications whose small customer base is limited to English-language monoglots in rural Iowa with no external interactions or any need for characters beyond what an old-style telegraph could send. But for how many projects is that really true? Fewer even than you think, it turns out.
It is for this reason that a certain (and obvious) multi-billion-dollar company just recently cancelled the international deployment of an important application. Java’s Unicode support, not just in regexes but throughout, proved too weak for the needed internationalization to be done reliably in Java. Because of this, they have been forced to scale back from their originally planned worldwide deployment to a merely U.S. deployment. It’s positively parochial. And no, they are Nᴏᴛ Hᴀᴘᴘʏ; would you be?
Java has had 20 years to get it right, and they demonstrably have not done so thus far, so I wouldn’t hold my breath. Or throw good money after bad; the lesson here is to ignore the hype and instead apply due diligence to make very sure that all the necessary infrastructure support is there before you invest too much. Otherwise you too may get stuck without any real options once you’re too far into it to rescue your project.
Caveat Emptor
One can rant, or one can simply write:
import java.util.Iterator;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Regex {
    /**
     * @param source
     *            the string to scan
     * @param pattern
     *            the regular expression to scan for
     * @return the matched substrings
     */
    public static Iterable<String> matches(final String source, final String pattern) {
        final Pattern p = Pattern.compile(pattern);
        final Matcher m = p.matcher(source);
        return new Iterable<String>() {
            @Override
            public Iterator<String> iterator() {
                return new Iterator<String>() {
                    @Override
                    public boolean hasNext() {
                        return m.find();
                    }

                    @Override
                    public String next() {
                        return source.substring(m.start(), m.end());
                    }

                    @Override
                    public void remove() {
                        throw new UnsupportedOperationException();
                    }
                };
            }
        };
    }
}
Used as you wish:
import org.junit.Test;

public class RegexTest {
    @Test
    public void test() {
        String source = "The colour of my bag matches the color of my shirt!";
        String pattern = "colou?r";
        for (String match : Regex.matches(source, pattern)) {
            System.out.println(match);
        }
    }
}
Some of the API flaws mentioned in @tchrist's answer were fixed in Kotlin.
Boy, do I hear you on that one, Alireza! Regexes are confusing enough without there being so many syntax variations among them. I too do a lot more C# than Java programming and had the same issue.
I found this to be very helpful: http://www.tusker.org/regex/regex_benchmark.html. It's a benchmarked list of alternative regular expression implementations for Java.
This one is darned good, if I do say so myself!
regex-tester-tool
I want to extract the words that begin with a capital — including accented capitals — using regular expressions in Java.
This is my conditional for words beginning with capital A through Z:
if (link.text().matches("^[A-Z].+") == true)
But I also want words that begin with an accented uppercase character, too.
Do you have any ideas?
Start with http://download.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html
\p{javaUpperCase} Equivalent to java.lang.Character.isUpperCase()
To match an uppercase letter at the beginning of the string, you need the pattern ^\p{Lu}.
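For instance, a minimal sketch of the difference from the ASCII-only class:
public class CapWord {
    public static void main(String[] args) {
        // \p{Lu} is any Unicode uppercase letter, so accented capitals count.
        System.out.println("École".matches("^[A-Z].+"));   // false
        System.out.println("École".matches("^\\p{Lu}.+")); // true
    }
}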
Unfortunately, Java does not support the mandatory \p{Uppercase} property, necessary for meeting UTS#18’s RL1.2.
That’s hardly the only thing missing from Java regular expressions needed to meet even Level 1, the most bare-bones Basic Unicode Functionality. Without Level 1, you really can’t work with Unicode text using regular expressions. Too much is broken or absent.
UTS#18’s RL1.1 will finally be met with JDK7, but I do not believe there are currently any plans to meet RL1.2, RL1.2a, or any of the others that it’s currently lacking, nor even meeting the two Strong Recommendations. Alas!
Indeed, of the very short list of mandatory properties required by RL1.2, Java is missing the \p{Alphabetic}, \p{Uppercase}, \p{Lowercase}, \p{White_Space}, \p{Noncharacter_Code_Point}, \p{Default_Ignorable_Code_Point}, \p{ANY}, and \p{ASSIGNED} properties. Those are all mandatory but either completely missing or else fail to obey The Unicode Standard with respect to their definitions. This is also the problem with the POSIX compatible properties in Java: they’re all broken with respect to UTS#18.
Prior to JDK7, Java was also missing the mandatory Script properties. JDK7 does get script properties at long last, but that’s all; nothing else. Java is still light years away from meeting even RL1.2a, which is a daily gotcha for zillions of programmers.
In JDK7, you can finally also use two-part properties in the form \p{name=value} if they’re blocks, scripts, or general categories. That means these are all the same in JDK7’s Pattern class:
\p{Block=Number_Forms}, \p{blk=Number_Forms}, and \p{InNumber_Forms}.
\p{Script=Latin}, \p{sc=Latin}, \p{IsLatin}, and \p{Latin}.
\p{General_Category=Lu}, \p{GC=Lu}, and \p{Lu}.
However, you still cannot use the long forms like \p{Lowercase_Letter} and \p{Letter_Number}, and the POSIX-looking properties are all broken from RL1.2a’s perspective. Plus, super-basic properties from RL1.2 like \p{White_Space} and \p{Alphabetic} are still missing.
There was some talk of trying to fix \b and \B, which are miserably broken with respect to \w and \W, but I don't know how they’re going to fix all that without fully complying with RL1.2a. And no, I have no idea when they will add those basic properties to Java. You can’t get by without them, either.
To fully work with Unicode using regexes in Java at even Level 1, you really cannot use the standard Pattern class that Java comes with. The easiest way to do so is to instead use JNI to connect up with ICU regex libraries using the Google Android code, which is available.
There do exist other languages that are at least Level-1 compliant (or better) with UTS#18, but if you want to stay within Java, ICU is currently your only real option.
Java has the method java.lang.Character.isUpperCase. It's not exactly a regular expression, but it might satisfy your need.
http://download.oracle.com/javase/1.5.0/docs/api/java/lang/Character.html#isUpperCase(int)
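For instance, a minimal sketch of that approach, checking the first code point so that characters outside the BMP are handled too:
public class FirstCharCheck {
    public static void main(String[] args) {
        String word = "École";
        // Handles accented capitals without any regex at all.
        if (!word.isEmpty() && Character.isUpperCase(word.codePointAt(0))) {
            System.out.println(word + " starts with an uppercase letter");
        }
    }
}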