You can build a parser around regexes though, where most of the code is regexes and a little glue code deals with the irregular parts. For instance, consider arithmetic expressions consisting of constants, -, +, *, /, and parentheses. You could evaluate one using something like the following (the expression to match a numeral is left as an exercise for the reader):
    while expression is not a numeral:
        replace all "\((NUMERAL)\)" with first group
        if find first "(NUMERAL)([*/])(NUMERAL)":
            replace with result
        else if find first "(NUMERAL)([+-])(NUMERAL)":
            replace with result
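A minimal sketch of that loop in Python, assuming NUMERAL means an optionally signed integer or decimal literal (the concrete regex is my choice, not the original poster's):

```python
import re

# Assumed definition of NUMERAL: optionally signed integer or decimal.
NUM = r"-?\d+(?:\.\d+)?"

def evaluate(expr: str) -> float:
    expr = expr.replace(" ", "")
    while not re.fullmatch(NUM, expr):
        prev = expr
        # Unwrap parentheses that surround a bare numeral.
        expr = re.sub(rf"\(({NUM})\)", r"\1", expr)
        # Reduce the first * or / before any + or - (precedence).
        m = re.search(rf"({NUM})([*/])({NUM})", expr)
        if not m:
            m = re.search(rf"({NUM})([+-])({NUM})", expr)
        if m:
            a, op, b = float(m.group(1)), m.group(2), float(m.group(3))
            result = {"*": a * b, "/": a / b if b else float("nan"),
                      "+": a + b, "-": a - b}[op]
            expr = expr[:m.start()] + repr(result) + expr[m.end():]
        if expr == prev:
            raise ValueError(f"cannot reduce: {expr!r}")
    return float(expr)
```

Note the stall check: if a pass changes nothing, the input was malformed (e.g. unbalanced parentheses) and we bail out rather than loop forever.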
What you are doing there is conflating the lexing phase and the parsing phase. Regexes are perfect for recognizing tokens, but for a language with nested parentheses you need a push-down automaton to process it; otherwise you cannot verify that your delimiters are matched.
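The push-down part is just a stack; a sketch of delimiter matching (the pair set is illustrative):

```python
def delimiters_balanced(s: str) -> bool:
    # The stack is the "push-down" store: push an expected closer on
    # each opener, pop and compare on each closer.
    pairs = {"(": ")", "[": "]", "{": "}"}
    closers = set(pairs.values())
    stack = []
    for ch in s:
        if ch in pairs:
            stack.append(pairs[ch])
        elif ch in closers:
            if not stack or stack.pop() != ch:
                return False
    return not stack  # leftover openers also mean a mismatch
```

No regular expression can do this for arbitrary nesting depth, since a finite automaton cannot count unboundedly.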
That's one way of looking at it, I suppose. But it does require stretching the definition of a token a little if you are, e.g., using regexes to identify arbitrarily long multiplications. My preferred perspective is that you use regexes to parse regular sublanguages, and then something more powerful (e.g. a push-down automaton, though there are parsers more powerful still) to bind those sub-parsers together.
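That division of labor might be sketched like this: a regex tokenizer handles the regular sublanguage (numerals, operators), and a recursive-descent parser, which is equivalent in power to a push-down automaton, handles the nesting. The grammar shown in the comments is my own illustration:

```python
import re

# Regular sublanguage: numerals and single-character operators.
TOKEN = re.compile(r"\s*(?:(\d+(?:\.\d+)?)|([()+\-*/]))")

def tokenize(src):
    src = src.rstrip()
    tokens, pos = [], 0
    while pos < len(src):
        m = TOKEN.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at position {pos}")
        num, op = m.groups()
        tokens.append(float(num) if num else op)
        pos = m.end()
    return tokens

def parse(tokens):
    # The call stack plays the role of the push-down store.
    def expr(i):                        # expr := term (('+'|'-') term)*
        val, i = term(i)
        while i < len(tokens) and tokens[i] in ("+", "-"):
            rhs, j = term(i + 1)
            val = val + rhs if tokens[i] == "+" else val - rhs
            i = j
        return val, i
    def term(i):                        # term := atom (('*'|'/') atom)*
        val, i = atom(i)
        while i < len(tokens) and tokens[i] in ("*", "/"):
            rhs, j = atom(i + 1)
            val = val * rhs if tokens[i] == "*" else val / rhs
            i = j
        return val, i
    def atom(i):                        # atom := NUMBER | '(' expr ')'
        if tokens[i] == "(":
            val, i = expr(i + 1)
            return val, i + 1           # consume the ')'
        return tokens[i], i + 1
    val, _ = expr(0)
    return val
```

Each nonterminal's body is essentially regular (a repetition over sub-parsers); only the recursion into atom adds the extra power needed for nesting.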