com.xuggle.ferry
Enum JNIMemoryManager.MemoryModel

java.lang.Object
  extended by java.lang.Enum<JNIMemoryManager.MemoryModel>
      extended by com.xuggle.ferry.JNIMemoryManager.MemoryModel
All Implemented Interfaces:
Serializable, Comparable<JNIMemoryManager.MemoryModel>
Enclosing class:
JNIMemoryManager

public static enum JNIMemoryManager.MemoryModel
extends Enum<JNIMemoryManager.MemoryModel>

The different native memory allocation models Ferry supports.

Memory Model Performance Implications

Choosing which JNIMemoryManager.MemoryModel to use in Ferry libraries can have a significant effect. Some models emphasize code that works "as you expect" (robustness) but sacrifice some execution speed to make that happen. Other models put speed first and assume you know what you're doing and can manage your own memory.

In our experience, the set of people who need robust software is much larger than the set of people for whom the (small) speed cost matters, so we default to the most robust model.

Also in our experience, the set of people who should just use the robust model but think they need speed is much larger than the set of people who actually understand Java memory management. So please: start with the robust model, and only change the JNIMemoryManager.MemoryModel if your performance testing shows you need the speed. Don't say we didn't warn you.

Model                                                  Robustness  Speed
JAVA_STANDARD_HEAP (default)                           +++++       +
JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION    +++         ++
NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION         +++         +++
JAVA_DIRECT_BUFFERS (not recommended)                  +           ++++
NATIVE_BUFFERS                                         +           +++++

What is "Robustness"?

Ferry objects have to allocate native memory to do their job -- it's the reason for Ferry's existence. And native memory management is very different from Java memory management (for example, native C++ code doesn't have a garbage collector). To make things easier for our Java friends, Ferry tries to make Ferry objects look like Java objects.

Which leads us to robustness. The more of these criteria a JNIMemoryManager.MemoryModel meets, the more robust it is.

  1. Allocation: Calls to make() must correctly allocate memory that can be accessed from native or Java code, and calls to delete() must release that memory immediately.
  2. Collection: Objects no longer referenced in Java should have their underlying native memory released in a timely fashion.
  3. Low Memory: New allocation in low memory conditions should first have the Java garbage collector release any old objects.

What is "Speed"?

Speed is how fast code executes under normal operating conditions. This is more subjective than it sounds (how do you define normal operating conditions?), but in general we define it as "generally plenty of heap space available".

How Does JNIMemoryManager Work?

Every object that is exposed from native code inherits from RefCounted.

Ferry works by implementing a reference-counted memory management scheme in native code that is then manipulated from Java, so you (usually) don't have to think about when to release native memory. Every time an object is created in native memory, its reference count is incremented by one, and everywhere inside the code we take care to release a reference when we're done.

This maps nicely to the Java model of memory management, but with the benefit that Java does all the releasing behind the scenes. When you pass an object from native code to Java, Ferry makes sure its reference count is incremented, and then when the Java Virtual Machine collects the instance, Ferry automatically decrements the reference count in native code.

In fact, in theory all you need to do is add a finalize() method on the Java object that decrements the reference count in native code, and everyone goes home happy.

So far so good, but it brings up a big problem: Java makes no guarantees about when (or even whether) finalization runs, and the garbage collector has no idea how much native memory each object holds, so native memory may not be released in a timely fashion.

Here's the good news: the RefCounted implementation, together with the JNIMemoryManager.MemoryModel options described below, solves these problems for you.

The end result: you usually don't need to worry.

In the event you need to manage memory more explicitly, every Ferry object has a copyReference() method that will create a new Java object pointing to the same underlying native object.

And in the unlikely event you want to control EXACTLY when a native object is released, each Ferry object has a RefCounted.delete() method you can use. Once you call delete(), you must ENSURE your object is never referenced again from that Java object. Ferry tries to help you avoid crashes if you accidentally use an object after deletion, but we cannot offer 100% protection (specifically, if another thread is accessing that object EXACTLY when you RefCounted.delete() it). If you don't call RefCounted.delete(), we will call it at some point in the future, but you can't depend on when (and depending on the JNIMemoryManager.MemoryModel you are using, we may not be able to do it promptly).
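The copyReference() / delete() life-cycle can be sketched in plain Java. This is a toy stand-in with invented names, not Ferry's actual RefCounted implementation; a real Ferry object frees native memory where this sketch merely flips a flag:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy stand-in for a Ferry-style reference-counted native resource.
class ToyRefCounted {
    private final AtomicInteger refCount = new AtomicInteger(1);
    private volatile boolean released = false;

    // Analogous to copyReference(): hand out another owner of the
    // same underlying resource by bumping the count.
    ToyRefCounted copyReference() {
        refCount.incrementAndGet();
        return this; // a real implementation returns a new Java wrapper
    }

    // Analogous to delete(): drop one reference; free the native
    // resource only when the last reference goes away.
    void delete() {
        if (refCount.decrementAndGet() == 0) {
            released = true; // stand-in for freeing native memory
        }
    }

    boolean isReleased() {
        return released;
    }
}

public class ToyRefCountedDemo {
    public static void main(String[] args) {
        ToyRefCounted a = new ToyRefCounted();
        ToyRefCounted b = a.copyReference(); // count: 2
        a.delete();                          // count: 1, still alive
        System.out.println("after first delete: " + b.isReleased());
        b.delete();                          // count: 0, resource freed
        System.out.println("after second delete: " + b.isReleased());
    }
}
```

Note how the resource stays alive after the first delete(): each reference holds the object open, and the native side is only freed when the last reference is released.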

What does all of this mean?

Well, it means if you're first writing code, don't worry about this. If you're instead trying to optimize for performance, first measure where your problems are, and if fingers are pointing at allocation in Ferry then start trying different models.

But before you switch models, be sure to read the caveats and restrictions on each of the non JAVA_STANDARD_HEAP models, and make sure you have a good understanding of how Java Garbage Collection works.


Enum Constant Summary
JAVA_DIRECT_BUFFERS
          Large memory blocks are allocated as Direct ByteBuffer objects (as returned from ByteBuffer.allocateDirect(int)).
JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION
          Large memory blocks are allocated as Direct ByteBuffer objects (as returned from ByteBuffer.allocateDirect(int)), but the Java standard-heap is informed of the allocation by also attempting to quickly allocate (and release) a buffer of the same size on the standard heap.
JAVA_STANDARD_HEAP
           Large memory blocks are allocated in Java byte[] arrays, and passed back into native code.
NATIVE_BUFFERS
          Large memory blocks are allocated in native memory, completely bypassing the Java heap.
NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION
          Large memory blocks are allocated in native memory, completely bypassing the Java heap, but Java is informed of the allocation by briefly creating (and immediately releasing) a Java standard heap byte[] array of the same size.
 
Method Summary
 int getNativeValue()
          Get the native value to pass to native code
static JNIMemoryManager.MemoryModel valueOf(String name)
          Returns the enum constant of this type with the specified name.
static JNIMemoryManager.MemoryModel[] values()
          Returns an array containing the constants of this enum type, in the order they're declared.
 
Methods inherited from class java.lang.Enum
clone, compareTo, equals, getDeclaringClass, hashCode, name, ordinal, toString, valueOf
 
Methods inherited from class java.lang.Object
finalize, getClass, notify, notifyAll, wait, wait, wait
 

Enum Constant Detail

JAVA_STANDARD_HEAP

public static final JNIMemoryManager.MemoryModel JAVA_STANDARD_HEAP

Large memory blocks are allocated in Java byte[] arrays, and passed back into native code. Releasing of underlying native resources happens behind the scenes with no management required on the programmer's part.

Speed

This is the slowest model available.

The main decrease in speed occurs for medium-life-span objects. Short-life-span objects (objects that die during the life-span of an incremental collection) are relatively efficient, and once an object makes it into the tenured generation, unnecessary copying stops until the next full collection.

However, while an object remains in the young generation, surviving from one incremental collection to the next, its large native buffers may be copied many times unnecessarily. This copying can have a significant performance impact.

Robustness

  1. Allocation: Works as expected.
  2. Collection: Released either when delete() is called, the item is marked for collection, or we're in Low Memory conditions and the item is unused.
  3. Low Memory: Very strong. In this model Java always knows exactly how much native heap space is being used, and can trigger collections at the right time.

Tuning Tips

When using this model, these tips may increase performance, although in some situations they may instead decrease it. Always measure.


JAVA_DIRECT_BUFFERS

public static final JNIMemoryManager.MemoryModel JAVA_DIRECT_BUFFERS
Large memory blocks are allocated as Direct ByteBuffer objects (as returned from ByteBuffer.allocateDirect(int)).

This model is not recommended. It is faster than JAVA_STANDARD_HEAP, but because of how Sun implements direct buffers, it works poorly in low memory conditions. This model has all the caveats of the NATIVE_BUFFERS model, but allocation is slightly slower.

Speed

This is the 2nd fastest model available. In tests it is generally 20-30% faster than the JAVA_STANDARD_HEAP model.

It uses Java to allocate direct memory, which is slightly slower than NATIVE_BUFFERS but much faster than the JAVA_STANDARD_HEAP model.

The downside is that for high-performance applications, you may need to explicitly manage RefCounted object life-cycles with RefCounted.delete() to ensure direct memory is released in a timely manner.

Robustness

  1. Allocation: Weak. Java controls allocations of direct memory from a separate heap (yet another one), and has an additional tuning option to set its size. By default on most JVMs this heap is limited to 64MB, which is very low for video processing (queue up 100 images and see what we mean).
  2. Collection: Released either when delete() is called, or when the item is marked for collection.
  3. Low Memory: Weak. In this model Java knows how much direct memory it has allocated, but it does not use the size of the direct heap to influence when it collects the normal, non-direct Java heap -- and our allocation scheme depends on normal Java heap collection. Java can therefore fail to run collections in a timely manner because it thinks the standard heap has plenty of room to grow. This may cause failures.
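The direct-heap limit mentioned in point 1 can be raised when the JVM starts. On HotSpot-based JVMs the relevant flag is -XX:MaxDirectMemorySize; the main class below is a placeholder:

```shell
# Raise the direct-buffer limit from the (typically 64MB) default to 1GB.
# com.example.MyVideoApp is a placeholder for your application's main class.
java -XX:MaxDirectMemorySize=1g com.example.MyVideoApp
```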

Tuning Tips

When using this model, these tips may increase performance, although in some situations they may instead decrease it. Always measure.


JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION

public static final JNIMemoryManager.MemoryModel JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION
Large memory blocks are allocated as Direct ByteBuffer objects (as returned from ByteBuffer.allocateDirect(int)), but the Java standard-heap is informed of the allocation by also attempting to quickly allocate (and release) a buffer of the same size on the standard heap.

This model can work well if your application is mostly single-threaded, and your Ferry application is doing most of the memory allocation in your program. The trick of informing Java will put pressure on the JVM to collect appropriately, but by not keeping the references we avoid unnecessary copying for objects that survive collections.

This heuristic is not failsafe though, and can still lead to collections not occurring at the right time for some applications.

It is similar to the NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION model and in general we recommend that model over this one.

Speed

This model trades off some robustness for some speed. In tests it is generally 10-20% faster than the JAVA_STANDARD_HEAP model.

It is worth testing as a way of avoiding the explicit memory management needed to effectively use the JAVA_DIRECT_BUFFERS model. However, the heuristic used is not fool-proof, and therefore may sometimes lead to unnecessary collection or OutOfMemoryError because Java didn't collect unused references in the standard heap in time (and hence did not release underlying native references).

Robustness

  1. Allocation: Good. Java controls allocations of direct memory from a separate heap (yet another one), and has an additional tuning option to set its size. By default on most JVMs this heap is limited to 64MB, which is very low for video processing (queue up 100 images and see what we mean). With this option, though, we inform Java of the allocation in the direct heap, which often encourages Java to collect memory on a more timely basis.
  2. Collection: Good. Released either when delete() is called, or when the item is marked for collection. Collections happen more frequently than under the JAVA_DIRECT_BUFFERS model due to informing the standard heap at allocation time.
  3. Low Memory: Good. Especially for mostly single-threaded applications, the collection pressure introduced on allocation will lead to more timely collections to avoid OutOfMemoryError errors on the Direct heap.

Tuning Tips

When using this model, these tips may increase performance, although in some situations they may instead decrease it. Always measure.


NATIVE_BUFFERS

public static final JNIMemoryManager.MemoryModel NATIVE_BUFFERS
Large memory blocks are allocated in native memory, completely bypassing the Java heap.

It is much faster than the JAVA_STANDARD_HEAP, but much less robust.

Speed

This is the fastest model available. In tests it is generally 30-40% faster than the JAVA_STANDARD_HEAP model.

It uses the native operating system to allocate memory, which is slightly faster than JAVA_DIRECT_BUFFERS and much faster than the JAVA_STANDARD_HEAP model.

The downside is that for high-performance applications, you may need to explicitly manage RefCounted object life-cycles with RefCounted.delete() to ensure native memory is released in a timely manner.

Robustness

  1. Allocation: Weak. Allocating with make() and releasing with RefCounted.delete() work as normal, but because Java has no idea how much space is actually allocated in native memory, it may not collect RefCounted objects as quickly as you need it to (it will eventually collect and free all references, though).
  2. Collection: Released either when delete() is called, or when the item is marked for collection.
  3. Low Memory: Weak. In this model Java has no idea how much native memory is allocated, and therefore does not use that knowledge in its determination of when to collect. This can lead to RefCounted objects you created surviving longer than you want, and therefore not releasing native memory in a timely fashion.

Tuning Tips

When using this model, these tips may increase performance, although in some situations they may instead decrease it. Always measure.


NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION

public static final JNIMemoryManager.MemoryModel NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION
Large memory blocks are allocated in native memory, completely bypassing the Java heap, but Java is informed of the allocation by briefly creating (and immediately releasing) a Java standard heap byte[] array of the same size.

It is faster than the JAVA_STANDARD_HEAP, but less robust.

This model can work well if your application is mostly single-threaded, and your Ferry application is doing most of the memory allocation in your program. The trick of informing Java will put pressure on the JVM to collect appropriately, but by not keeping the references to the byte[] array we temporarily allocate, we avoid unnecessary copying for objects that survive collections.

This heuristic is not failsafe though, and can still lead to collections not occurring at the right time for some applications.
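The notification trick amounts to briefly allocating a standard-heap array of the same size as the native block and immediately dropping it. A minimal sketch, with invented names (Ferry's real logic lives inside its native allocator, and ByteBuffer.allocateDirect here merely stands in for a true malloc-backed allocation):

```java
import java.nio.ByteBuffer;

public class NotifyDemo {
    // Sketch of "allocate natively, but tell the Java heap about it":
    // the temporary byte[] creates allocation pressure on the standard
    // heap, encouraging timely garbage collection, and is immediately
    // discarded so nothing gets copied between GC generations.
    static ByteBuffer allocateWithNotification(int size) {
        byte[] pressure = new byte[size]; // inform the standard heap
        pressure = null;                  // drop it right away
        // Stand-in for a true native allocation:
        return ByteBuffer.allocateDirect(size);
    }

    public static void main(String[] args) {
        ByteBuffer buf = allocateWithNotification(1024 * 1024);
        System.out.println("capacity: " + buf.capacity());
    }
}
```

Because the byte[] is unreferenced the moment the method returns, it costs one young-generation allocation but never survives a collection, which is exactly the copying overhead this model is trying to avoid.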

It is similar to the JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION model.

Speed

In tests this model is generally 25-30% faster than the JAVA_STANDARD_HEAP model.

It uses the native operating system to allocate memory, which is slightly faster than JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION and much faster than the JAVA_STANDARD_HEAP model.

It is worth testing as a way of avoiding the explicit memory management needed to effectively use the NATIVE_BUFFERS model. However, the heuristic used is not fool-proof, and therefore may sometimes lead to unnecessary collection or OutOfMemoryError because Java didn't collect unused references in the standard heap in time (and hence did not release underlying native references).

Robustness

  1. Allocation: Good. With this option we allocate large, long-lived memory from the native heap, but we inform the Java standard heap of each allocation, which often encourages Java to collect memory on a more timely basis.
  2. Collection: Good. Released either when delete() is called, or when the item is marked for collection. Collections happen more frequently than under the NATIVE_BUFFERS model due to informing the standard heap at allocation time.
  3. Low Memory: Good. Especially for mostly single-threaded applications, the collection pressure introduced on allocation will lead to more timely collections to avoid OutOfMemoryError errors on the native heap.

Tuning Tips

When using this model, these tips may increase performance, although in some situations they may instead decrease it. Always measure.

Method Detail

values

public static final JNIMemoryManager.MemoryModel[] values()
Returns an array containing the constants of this enum type, in the order they're declared. This method may be used to iterate over the constants as follows:
for (JNIMemoryManager.MemoryModel c : JNIMemoryManager.MemoryModel.values())
    System.out.println(c);

Returns:
an array containing the constants of this enum type, in the order they're declared

valueOf

public static JNIMemoryManager.MemoryModel valueOf(String name)
Returns the enum constant of this type with the specified name. The string must match exactly an identifier used to declare an enum constant in this type. (Extraneous whitespace characters are not permitted.)

Parameters:
name - the name of the enum constant to be returned.
Returns:
the enum constant with the specified name
Throws:
IllegalArgumentException - if this enum type has no constant with the specified name
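For illustration, the exact-match behavior of valueOf(String) can be demonstrated with a local enum that mirrors the constants above (a stand-in, since this snippet doesn't link against com.xuggle.ferry):

```java
// Local mirror of the MemoryModel constants, for illustration only;
// real code would use com.xuggle.ferry.JNIMemoryManager.MemoryModel.
enum MemoryModel {
    JAVA_STANDARD_HEAP,
    JAVA_DIRECT_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION,
    NATIVE_BUFFERS_WITH_STANDARD_HEAP_NOTIFICATION,
    JAVA_DIRECT_BUFFERS,
    NATIVE_BUFFERS
}

public class ValueOfDemo {
    public static void main(String[] args) {
        // Round trip: name() and valueOf() are exact inverses.
        MemoryModel m = MemoryModel.valueOf("NATIVE_BUFFERS");
        System.out.println(m.name()); // prints NATIVE_BUFFERS

        // valueOf() requires an exact identifier match; anything
        // else (wrong case, extra whitespace) throws.
        try {
            MemoryModel.valueOf("native_buffers");
        } catch (IllegalArgumentException e) {
            System.out.println("no such constant");
        }
    }
}
```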

getNativeValue

public int getNativeValue()
Get the native value to pass to native code.

Returns:
the native value used to represent this model in native code.


Copyright © 2008, 2010 Xuggle