Java performance concerns with data types in the CIM sblim client


I have a question/concern regarding implementation styles used by some Java frameworks.

First, as general background, I know from hands-on experience that creating arrays of objects is far more expensive than creating arrays of primitives. For example, if you ask some layer below for a large chunk of character data, the most efficient choice is a type like char[], byte[], or ByteBuffer, rather than inventing a wrapper data type and returning an array of objects.
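To make the cost concrete, here is a minimal sketch contrasting the two allocation patterns (the class and method names are my own, for illustration only). A primitive array is a single contiguous allocation; an array of wrapper objects needs a reference array plus, in general, one heap object per element. (Byte happens to be a special case, since autoboxing goes through Byte.valueOf, which caches all 256 values; a custom wrapper type like UnsignedInteger8 gets no such caching and allocates a fresh object per element.)

```java
// Sketch: primitive array vs. array of wrapper objects.
public class PrimitiveVsWrapper {
    static byte[] primitiveArray(int n) {
        // Single contiguous allocation: n bytes of payload plus one array header.
        return new byte[n];
    }

    static Byte[] wrapperArray(int n) {
        // One allocation for the reference array...
        Byte[] boxed = new Byte[n];
        for (int i = 0; i < n; i++) {
            // ...plus a boxed object per element in the general case.
            // (Byte.valueOf caches all byte values, but a framework wrapper
            // type typically allocates every time.)
            boxed[i] = (byte) i;
        }
        return boxed;
    }

    public static void main(String[] args) {
        System.out.println(primitiveArray(1000).length); // prints 1000
        System.out.println(wrapperArray(1000).length);   // prints 1000
    }
}
```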

However, I have seen frameworks that define their own data types and expect arrays of those types to be passed back and forth. This incurs all of the performance costs mentioned above, and the conversion code it requires is a real inconvenience.

For example, consider the types defined in sblim client's javax.cim package:

http://sblim.sourceforge.net/cim-client2-v22-doc/javax/cim/package-summary.html

There's one type named UnsignedInteger8 which is essentially a byte, and the other types likewise have efficient primitive equivalents. A comment about DMTF data types suggests they encapsulate each data type in a Java class in order to comply with the protocol, but at what cost?

So, to get to the point: can someone more experienced with Java performance and framework design provide feedback on whether the sblim client is an efficient implementation? Is there a rationale for wrapper types like UnsignedInteger8 when a byte or char would have done? Does having standardized, uniform code for handling all DMTF data types justify the performance lost by using arrays of objects instead of primitives? Am I getting the point right or wrong here?
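For illustration, a type like UnsignedInteger8 boils down to something like the following stand-in (this is my own minimal sketch, not the actual sblim source, which adds Comparable, equals()/hashCode(), and more validation). Note that holding an unsigned 8-bit value in Java already forces a wider signed field:

```java
// Minimal stand-in for an unsigned-8-bit wrapper type (illustrative only;
// not the real javax.cim.UnsignedInteger8 implementation).
public class UInt8 {
    // A short is needed because Java's byte is signed (-128..127),
    // while the CIM uint8 range is 0..255.
    private final short value;

    public UInt8(short value) {
        if (value < 0 || value > 255)
            throw new NumberFormatException("out of range: " + value);
        this.value = value;
    }

    public short shortValue() {
        return value;
    }
}
```

Every element of an UnsignedInteger8[] is therefore a full heap object wrapping what could have been a single byte in a byte[].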

Thanks in advance!

UPDATE

To further show my point, consider this excerpt:

static StringBuilder convertToText(UnsignedInteger8[] u8arr) {
    if (u8arr == null) return null;
    StringBuilder sb = new StringBuilder(u8arr.length);
    for (UnsignedInteger8 u8 : u8arr) {
        sb.append((char) u8.shortValue());
    }
    return sb;
}

I have to use code like that to map a chunk of character data into a StringBuilder. This is really inefficient: the data is already loaded in memory by the CIM client, so why should I need to create a new StringBuilder and iterate over and copy all the data again? Why can't the CIM client just provide a String, byte[], char[], or ByteBuffer? I believe this is very inefficient and I would like to understand why they implemented it this way.
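For comparison, if the client exposed the payload as a byte[], the same conversion would be a single pass with no per-element objects and no unboxing calls. This is a hypothetical sketch (the method name and class are mine, not part of the sblim API); the `& 0xFF` mask reproduces the unsigned widening that shortValue() performs:

```java
// Hypothetical: the same conversion as convertToText, but from a
// primitive byte[] instead of UnsignedInteger8[].
public class ByteText {
    static StringBuilder toText(byte[] bytes) {
        if (bytes == null) return null;
        StringBuilder sb = new StringBuilder(bytes.length);
        for (byte b : bytes) {
            // Mask to 0..255 so the unsigned value maps to the right char,
            // mirroring UnsignedInteger8.shortValue().
            sb.append((char) (b & 0xFF));
        }
        return sb;
    }

    public static void main(String[] args) {
        System.out.println(toText(new byte[] {72, 105})); // prints "Hi"
    }
}
```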

I understand the advantages of abstracting data types and using the facilities Java provides, such as Comparable, equals()/hashCode(), the Collections API, and so forth. This makes sense and is worth it for complex structured data such as Person, Account, or Transaction, but not for raw/primitive data types. It is just too much.
