developerWorks > Java technology
Eye on performance: Improve your development processes
Compilation speed, exceptions, and heap size get the regulars at the Big Moose Saloon talking

Level: Introductory

Jack Shirazi, Director
Kirk Pepperdine, CTO

30 Jul 2003

Performance. It's the one aspect of the Java platform that continually takes abuse. But the overwhelming success of the platform on other fronts makes performance issues worth serious investigation. In this new column, intrepid optimizers Jack Shirazi and Kirk Pepperdine follow performance discussions all over the Internet, expanding and clarifying the issues they encounter. This month, they head over to the JavaRanch to cover discussions on compilation speed, exceptions, and heap size tuning.

This past month we spent a lot of time down at the JavaRanch's Big Moose Saloon to see what kind of performance questions the JavaRanch greenhorns are asking. Most are about J2SE and development procedures -- questions about the Java language, core classes, and how to improve their development processes.

Compilation speed

Have you found your compilation phase to be slow? Does javac take too long? Then try the Jikes compiler to add that extra "zing" when creating .class files. That's the all new, extra-fresh Jikes, with complete Java source support. (May cause VerifyError, not all javac options supported, bytecode may not be as advertised, and your performance may vary. Always read the manual before use.)

OK, so the discussions at the JavaRanch about Jikes weren't quite as direct as our homemade advert here, but several readers definitely suggested that the Jikes Java compiler was designed for speedy compilation. That's useful to know, especially for projects with many files to compile. Beware, though, that while Jikes can help speed up your development process, you are probably better off doing your final compilation with the compiler that comes with the JVM that you will be using in production. Things can be different enough across JVM versions that problems can occur when using compilers that are different from their JVMs.

Exceptions are expensive
Yes, exceptions are expensive. So does that mean you should never use them? Of course not. But when should you use exceptions and when shouldn't you? Unfortunately, there's no simple answer.

We can say that you don't need to abandon the good try-catch programming practices you've been taught, but there is one instance where you'll run into trouble: creating exceptions. When an exception is created, it needs to gather a stack trace describing where it was created. Remember those stack traces you see printed out when an unexpected exception is thrown in your code? Like this one:

Exception in thread "main" my.corp.DidntExpectThisException
        at T.noExceptionsHere(
        at T.main(

Building those stack traces requires taking a snapshot of the runtime stack, and that's the expensive part. The runtime stack is not designed for efficient exception creation; it's designed to let the runtime run as fast as possible. Push and pop, push and pop. Get the job done, with no unnecessary delays. But when an Exception has to be created, the JVM needs to say "freeze, I want a nice picture of you now, so stop that pushing and popping and smile nicely until I'm done." A stack trace doesn't just contain one or two elements from the stack, it contains every element, from top to bottom, with line numbers and everything. If the exception gets created in a stack of depth 20, there's no option that says you can just record the top few stack elements -- you get all 20. The stack is recorded all the way from main (at the bottom of the stack) right up to the top.
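The full-depth snapshot is easy to see for yourself. Here's a small sketch (the class and method names are ours, not from the discussion) that builds a call stack of known depth, creates an exception at the bottom, and counts the frames it captured:

```java
public class StackDepthDemo {

    // Recurse 'depth' more times, then create (but don't throw) an exception.
    static Exception dive(int depth) {
        if (depth == 0) {
            return new Exception("created at the bottom of the stack");
        }
        return dive(depth - 1);
    }

    public static void main(String[] args) {
        Exception e = dive(20);
        // The trace holds every frame from the creation point down to main:
        // all 21 dive() frames plus main itself, not just the top few.
        System.out.println("stack trace elements: " + e.getStackTrace().length);
    }
}
```

Notice that the exception is never thrown; merely creating it was enough to pay for the full snapshot.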

So creating exceptions is the expensive bit. Technically, the stack trace snapshot happens in the native method Throwable.fillInStackTrace(), which is called from the Throwable constructor. But that doesn't change anything -- if you create an Exception, you are going to pay the cost. The good news is that catching exceptions is not the expensive bit, so you can use try-catch to your heart's content. You can also define throws clauses in your method definitions without performance penalty, such as:

public Blah myMethod(Foo x) throws SomeBarException {

Technically, you can even throw exceptions freely without too much cost. It's not the throw operation that incurs the performance cost -- although it is unusual to throw an exception without first creating one. It's creating the exception that costs you.

try {
  if (true)
    throw new SomeException(); // because my program runs too fast
} catch (SomeException e) {
  // handle it
}

Fortunately, good programming practice already teaches us that you should not be throwing exceptions willy-nilly. Exceptions are designed for exceptional conditions, and should be kept that way. But just in case you don't like following good programming practices, the Java language gives you an added incentive by making your program run faster if you do.
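If you genuinely must signal with an exception on a hot code path, one well-known workaround (not something the JavaRanch thread prescribed, just a common technique) is to override fillInStackTrace() so the expensive snapshot is skipped entirely. A minimal sketch, with a class name of our own invention:

```java
public class LightweightException extends Exception {

    public LightweightException(String message) {
        super(message);
    }

    // Skip the expensive native stack snapshot. The resulting trace is
    // empty, so only do this where you'll never need to know where the
    // exception came from.
    @Override
    public synchronized Throwable fillInStackTrace() {
        return this;
    }
}
```

The trade-off is real: an empty stack trace makes debugging much harder, so reserve this for exceptions used as pure control signals.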

A word from Jack and Kirk, the "performance guys"
We all understand why performance is important in computing. Hardware and software developers have been heavily focused on performance from the beginning, from Alan Turing attempting to discover enemy encryption keys faster so that lives could be saved, to Seymour Cray offering the beauty and balance that was found in his trademark supercomputers, to the pure power required by Deep Blue to compete with the major computational engine contained in Garry Kasparov's head. Though we seek to maximize performance in the programs we develop, we often fail to notice that in many instances our environment is already tuned for performance. It's our goal in writing these tips each month to help shed a different light on your particular performance problem du jour.

Maximum heap size
In all the discussion groups we visited, questions on tuning the JVM heap kept cropping up. One JavaRanch discussion started out with the basic question "What should be the maximum heap size setting?" Before delving into the details, let's first go over the basics of memory management in the Java runtime.

The JVM has a memory space that it manages. The part of the space where objects live (and die) is called the heap space. Objects are created in the heap space, and they are moved around the heap space by the JVM garbage collector at various times, such as when defragmenting (or compacting) the heap. Objects can die in the heap, too. A dead object is simply one that is no longer accessible by the application. The JVM garbage collector looks for these dead objects and reclaims the space they used, in order to make space available for new objects. When the garbage collector can no longer free up space by reclaiming dead objects, the heap is said to be full.
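You can watch an object "die" using a weak reference, which lets you observe the collector reclaiming it. A quick sketch (class name is ours; note that System.gc() is only a request, so the JVM is not strictly required to collect right away):

```java
import java.lang.ref.WeakReference;

public class DeadObjectDemo {
    public static void main(String[] args) {
        Object obj = new Object();                      // live: reachable via 'obj'
        WeakReference<Object> ref = new WeakReference<>(obj);

        obj = null;                                     // now dead: no strong reference remains
        System.gc();                                    // request (not demand) a collection

        // On most JVMs the weak reference has been cleared by this point,
        // showing that the collector reclaimed the dead object's space.
        System.out.println("collected: " + (ref.get() == null));
    }
}
```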

A full heap is a problem. When the heap is full and the application tries to create more objects, the JVM can ask the underlying operating system for more memory, so it can make the heap larger. If the JVM cannot obtain more memory, then allocating a new object will throw OutOfMemoryError. And unless your application is pretty sophisticated, that usually means that your application crashes.

So what can we do about it? Most JVMs have an optional parameter that specifies the largest size that the heap is allowed to grow to. After this size is reached, the JVM is no longer allowed to request more memory from the operating system. In recent JVMs from Sun and IBM, that parameter is specified with the -Xmx option. Older JVMs used a -mx parameter, and most JVMs still understand that option. Application servers have their own configuration parameter that specifies the maximum heap size, and these usually feed through to the -Xmx parameter. If you don't explicitly use the -Xmx parameter, the JVM has a default maximum heap size, which of course is vendor- and version-specific. The Sun 1.4 JVM defaults to a 64-megabyte maximum heap size.
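You can check what ceiling your JVM actually settled on from inside the program: since 1.4, Runtime.maxMemory() reports the maximum heap size. A small sketch (class name is ours):

```java
public class HeapLimits {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() reports the -Xmx ceiling (or the vendor default if unset);
        // totalMemory() is what the JVM has claimed from the OS so far;
        // freeMemory() is the unused portion of that claimed space.
        System.out.println("max heap:     " + (rt.maxMemory() / mb) + " MB");
        System.out.println("current heap: " + (rt.totalMemory() / mb) + " MB");
        System.out.println("free in heap: " + (rt.freeMemory() / mb) + " MB");
    }
}
```

Try running it twice, once plainly and once as java -Xmx256m HeapLimits, and watch the first number change.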

So what should the maximum heap size be to ensure optimal performance? You might think the answer is "as large as possible," so that you stave off out-of-memory errors and give your application as much memory as it can use. Well, it turns out that too large a heap can be a significant problem because of the way operating systems work. Specifically, modern operating systems have a real memory and a virtual memory. Virtual memory creates the illusion of having more memory than you actually have by supplementing real memory with disk space in swap files, which act as a kind of overflow memory. The operating system can take pages that are not being actively used and put them on the disk until they are needed again, freeing up real memory (temporarily) for other uses. This way, the available memory can appear to be larger than the real memory, allowing more or larger processes to run. The trade-off is that those pages on the disk have to be moved back to the real memory when they are needed, and that can be really slow. Disks are a lot slower than memory is.

If you allow the heap to get bigger than the real memory of the system (the physical RAM installed on your machine), then your heap can start paging. That in itself might not be such a problem -- after all, only the infrequently used pages are shunted off to disk. However, when it comes to garbage collection, the whole of the heap tends to get scanned, causing all those seldom used pages to be paged into real memory, with other pages needing to be moved out to the disk to make space for those old pages. And this is a vicious cycle, because the pages that have just been moved to disk are themselves likely to be seldom used pages in the heap, which the garbage collector is just about to scan as part of the garbage collection. The result is that you will spend more time moving pages in and out of memory than you will getting any useful work done.

Garbage collection is often an application bottleneck already. But if you make the heap so large that the operating system must page significantly in order for the JVM to perform a garbage collection, the result is a cascade of very slow paging activity, which will slow your application down to a crawl. So make sure that the maximum heap size is smaller than the available system RAM, taking into account other processes that may also need to be running at the same time, to prevent this paging disaster.


About the authors
Jack Shirazi is a company Director and the author of Java Performance Tuning (O'Reilly). Jack was an early adopter of Java, and for the last few years has consulted primarily for the financial sector, focusing on Java performance.

Kirk Pepperdine is a Chief Technical Officer and has been focused on object technologies and performance tuning for the last 15 years. Kirk is a co-author of the book ANT Developer's Handbook.


