Performance Comparison of Markdown Processors for the JVM

Jan 10, 2011 12:15 · 1632 words · 8 minutes read Scala Markdown Actuarius

Update (7th August 2017): This and other articles are kept for reference. Please note that Actuarius is no longer maintained by me but is now part of the Lift Web Framework.

Some Words On Optimization

In my opinion, optimization should always be the last step when writing code. I strongly tend to adhere to the famous saying by Donald Knuth:

“Premature optimization is the root of all evil.”

I always try hard to write clean code first and then apply optimizations only when they are necessary. As a friend of mine constantly reminds me:

“Good developers ship.”

(I still have to find out where that quote is from.) It happens all too easily that one gets stuck trying to make some part of the code as fast as possible, and then there is only one day left until the deadline and only 50% of the project is done. No matter how fast that 50% is: the project is a failure! Also, trying to be as fast as possible right from the start tends to end in messy code.

So before I ran the first performance tests, I had already added Actuarius to my blog engine, and the articles were displayed without noticeable delay. It already passed the first speed criterion for a piece of software: it was fast enough. Yet there are two other competitors, both established projects with more features, so after finishing the first beta of Actuarius, I was of course interested in how well it performed in comparison with those other two. As it has the fewest features, it should at least be fast!

My first test was running all three implementations over all my blog posts a thousand times. That gave me a bit of a shock: PegDown was five times faster than Actuarius, Knockoff ten times. Ouch. So I spent some more time tracking down the weak points of Actuarius. Knockoff is written in Scala using parser combinators, too, so I couldn't blame the tools I used. I learned some interesting things about the performance of parser combinators while speeding up Actuarius, which I hope to write down in the next couple of days. For the impatient: after I was done speed-refactoring Actuarius, it was twice as fast as PegDown and four times as fast as Knockoff, so I hereby boldly claim: Actuarius is the fastest Markdown parser on the JVM. Of course, being the author of Actuarius, I am hardly neutral, so the test details follow below. I will publish the testing code in a couple of days.

The Tests

During my initial tests I noticed that the speed of each Markdown processor depends heavily on the input. So the extensive tests contain a number of scenarios that I found useful during development to track down weak points. Each basically tests one feature of the Markdown syntax. The overall speed result I quote comes from a test text that contains a bit of each, in ratios that I think can be considered "realistic". The tests were run on my laptop with a two-core Intel i5 processor at a bit over 2 GHz; the VM was given 2 GB of RAM to reduce the influence of garbage collection. The results presented here are from a run using the official Sun JRE (version 1.6.0_22); I repeated the tests with OpenJDK, but the results were almost exactly the same.

All tests presented here are based on the same test text. I built a simple Markdown test generator to generate and modify the test cases more easily. The test text is De finibus bonorum et malorum, which is said to be the basis for the commonly used Lorem Ipsum test text. The generator rudely chops it up into a configurable number of paragraphs, lines per paragraph, and approximate characters per line. Depending on the test, it "decorates" parts of the text with Markdown syntax. Here are the descriptions of the tests I ran, with a table of the results at the end (all tests consist of 30 paragraphs with 20 lines each and approx. 80 characters per line, before "decoration"):

  • Plain Paragraphs: No modifications, just the plain text.
  • Emphasis: Every word emphasized, so the input foo is decorated as *foo* which should render as <em>foo</em>.
  • Strong: Every word made strong, so foo is decorated as **foo** which should render as <strong>foo</strong>.
  • Inline Code: Every word marked as inline code, so foo is decorated as `foo` which should render as <code>foo</code>.
  • Fast Links: Every word a link without title or wrapped text, so foo is decorated as <foo> which should render as <a href="foo">foo</a>.
  • Special XML Chars: Every word is replaced by chars that need to be escaped in XML, so foo is replaced by "><'& which should render as &quot;&gt;&lt;'&amp;.
  • Inline HTML: Every word is wrapped in verbatim HTML, so foo is decorated as <blink>foo</blink> which should be passed through as <blink>foo</blink>.
  • Manual Line Breaks: Every line gets two spaces appended, so some line is decorated as some line followed by two trailing spaces, which should render as some line<br/>.
  • Full Links: Every word is turned into a link with wrapped text, URL and title, so foo is decorated as [foo]( "foo Title") which should render as <a href="" title="foo Title">foo</a>.
  • Full Images: Like full links, every word is turned into an image reference.
  • Reference Links: Every word is turned into a link reference with an increasing id counter, so foo is decorated as [foo][id123]; the link reference definition is added after the paragraph the link occurs in, and it should render as <a href="" title="foo Title">foo</a>.
  • Block Quotes: Every paragraph is turned into a block quote by prepending a > to each line.
  • Code Blocks: Every paragraph is turned into a code block by prepending four spaces to each line.
  • Unordered Lists: Every paragraph is turned into a list with an item for each line by prepending * to each line.
  • Mixed Test: A mix of all of the above: some paragraphs are lists, some are code, some words emphasized, etc.
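The generator described above can be sketched in a few lines of Scala. This is a simplified sketch with hypothetical names, not the actual generator (which I will publish with the test code):

```scala
// Sketch of a test-text generator as described above (all names hypothetical).
// It chops a source text into paragraphs and lines of roughly the requested
// size, then "decorates" words or lines with Markdown syntax per scenario.
object TestTextGenerator {
  /** Chop `source` into `paragraphs` paragraphs of `lines` lines each, with
    * approximately `charsPerLine` characters per line (word boundaries kept). */
  def chop(source: String, paragraphs: Int, lines: Int, charsPerLine: Int): Seq[Seq[String]] = {
    val words = source.split("\\s+").iterator
    Seq.fill(paragraphs) {
      Seq.fill(lines) {
        val sb = new StringBuilder
        while (words.hasNext && sb.length < charsPerLine) {
          if (sb.nonEmpty) sb.append(' ')
          sb.append(words.next())
        }
        sb.toString
      }
    }
  }

  /** Decorate every word of a line, e.g. `w => s"*$w*"` for the Emphasis test. */
  def decorateWords(line: String)(f: String => String): String =
    line.split(' ').map(f).mkString(" ")

  /** Decorate every line of a paragraph, e.g. prepending "> " for Block Quotes. */
  def decorateLines(paragraph: Seq[String])(f: String => String): Seq[String] =
    paragraph.map(f)

  /** Join paragraphs with blank lines into one Markdown document. */
  def render(paragraphs: Seq[Seq[String]]): String =
    paragraphs.map(_.mkString("\n")).mkString("\n\n")
}
```

For the Emphasis scenario, for example, every word would be mapped through `w => s"*$w*"`; for Code Blocks, every line through `l => "    " + l`.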

The Results

I gave each processor two runs; each run consisted of parsing the input 100 times. The time difference between the two runs illustrates nicely how the JIT compiler "sinks its teeth in". Interestingly, the Scala implementations benefit much more from it than PegDown. This might be because of all the anonymous functions and closures that get inlined (I am just wildly guessing here).
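The two-run scheme can be sketched roughly like this (a simplified harness with illustrative names, not the actual test code):

```scala
// Minimal sketch of the two-run timing scheme described above.
// `parse` stands for whichever Markdown processor is under test.
object Benchmark {
  /** Run `parse` over `input` `iterations` times and return the
    * wall-clock time in milliseconds. */
  def timeRun(input: String, iterations: Int)(parse: String => String): Long = {
    val start = System.nanoTime()
    var i = 0
    while (i < iterations) { parse(input); i += 1 }
    (System.nanoTime() - start) / 1000000
  }

  /** Two consecutive runs of 100 iterations each; the difference between
    * the two results shows how much the JIT compiler kicks in. */
  def twoRuns(input: String)(parse: String => String): (Long, Long) =
    (timeRun(input, 100)(parse), timeRun(input, 100)(parse))
}
```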

| Test | Actuarius 1st Run (ms) | Actuarius 2nd Run (ms) | PegDown 1st Run (ms) | PegDown 2nd Run (ms) | Knockoff 1st Run (ms) | Knockoff 2nd Run (ms) |
|---|---|---|---|---|---|---|
| Inline Code | 254 | 197 | 720 | 706 | 6035 | 5981 |
| Fast Link | 1175 | 653 | 399 | 381 | 2665 | 2230 |
| XML Chars | 880 | 803 | 2062 | 2081 | 220 | 221 |
| Manual HTML | 2036 | 1610 | 607 | 601 | 2422 | 2300 |
| Line breaks | 449 | 410 | 904 | 868 | 953 | 694 |
| Full link | 246 | 212 | 742 | 685 | 1119 | 1129 |
| Full image | 146 | 99 | 760 | 769 | 1193 | 1150 |
| Reference link | 6325 | 5962 | 33431 | 32674 | 76697 | 75939 |
| Mixed Test | 1975 | 1804 | 3537 | 3533 | 6629 | 6528 |

Discussion & Conclusion

Most of these tests are of course unrealistic: who would write a text where each word is a link? Yet they serve an important purpose: they make it possible for the developer to pinpoint the parts of the parser with the most room for improvement. They also explain why certain texts might render much faster in one processor than in another.

Interestingly, Knockoff performs much worse here than my initial tests using my own blog posts indicated. The reason is that it is really fast on plain text and code, which is what my posts mostly consist of. One can see above that its performance for emphasized text is orders of magnitude worse than for normal text. What gives Actuarius its edge is its fast performance on plain text and verbatim code (I have written some rather low-level manual parsers for those, which reduce backtracking significantly); otherwise it is more or less on one level with PegDown.

It also shows Actuarius' weak spot: the inline XML handling is still pretty slow. I still have to do some tuning there. To be fair, I have to admit that Actuarius has one other serious weak spot: if you feed it a file where each line starts with an opening XML tag, it will exhibit quadratic complexity. The reason for this is that, according to Markdown rules, a verbatim XML/HTML block is started by an opening XML tag in the first column of a line. If every line starts with an opening tag, Actuarius will go through all consecutive lines looking for a closing XML tag. If none is found, it decides that no XML block starts on that line. It will however try again with the opening tag on the next line (as it has forgotten that no closing tag is coming). So for each of the lines, it tries to parse an XML block and only detects that this is not possible when it reaches the end of input. Interestingly, Knockoff displays the same behavior, while PegDown has no problem with such input. I will definitely try to fix that in the next version of Actuarius.
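The quadratic behavior can be illustrated with a toy model of that scanning strategy (a deliberately naive sketch, not Actuarius' actual code):

```scala
// Toy model of the backtracking described above: for each line that starts
// with an opening tag, scan all following lines for a closing tag; when none
// is found, give up for that line and start the scan over at the next one.
object QuadraticXmlScan {
  /** Count the line visits the strategy performs. For n lines that all start
    * with an opening tag and contain no closing tag this is n * (n + 1) / 2,
    * i.e. quadratic in the number of lines. */
  def lineVisits(lines: Seq[String]): Int = {
    var visits = 0
    for (start <- lines.indices if lines(start).startsWith("<")) {
      var i = start
      var found = false
      while (i < lines.length && !found) {
        visits += 1
        found = lines(i).contains("</") // look for a closing tag
        i += 1
      }
      // If no closing tag was found, no XML block starts at `start` -- but
      // the scan is repeated from scratch for the next line, which is
      // exactly what makes the pathological input quadratic.
    }
    visits
  }
}
```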

One thing I also really like about this test: it proves that functional languages do not have to be slow! I am a big fan of functional programming, but even I sometimes have this nagging doubt that it might be slow. Well, I sleep better now!

Oh, and one more important point: Actuarius is fastest. (Well, except for inline XML…)


The author of PegDown has already announced that he will tune PegDown once he gets the detailed tests, to make it the fastest of the three. So keep your eyes peeled for the newest PegDown version! Once it is out, I will run the comparison again. Then again, I still know of a couple of areas in Actuarius where a significant speed boost is possible. I won't give up easily ;-)

Knockoff does not look so good performance-wise when we look at its handling of inline syntax. However, I really like Knockoff's concept of making the syntax tree resulting from parsing available as an intermediate result, to allow custom modification of the output (especially creating a table of contents). So I think I will ~~steal this idea~~ let myself get inspired by this idea, as Actuarius already uses similar structures as intermediate parsing results anyway, and make them available as a result in a later version (so much to do, so little time).
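To see why an exposed syntax tree is so handy, here is a minimal sketch of the idea (hypothetical node types, neither Knockoff's nor Actuarius' actual classes):

```scala
// Hypothetical intermediate syntax tree and a table-of-contents pass over it,
// to illustrate why exposing the tree as a parsing result is useful.
sealed trait Block
case class Heading(level: Int, text: String) extends Block
case class Paragraph(text: String) extends Block

object Toc {
  /** Collect all headings into a simple indented table of contents. */
  def build(blocks: Seq[Block]): String =
    blocks.collect { case Heading(level, text) =>
      ("  " * (level - 1)) + "* " + text
    }.mkString("\n")
}
```

With access to the tree, a table of contents is just one more traversal over the parse result instead of a second pass over the rendered HTML.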