Memory overhead in Java

What exactly is memory overhead in Java?

Sum: 8 + 4 + len * 8 bytes. For a JVM with compressed pointers: 4 + 4 + len * 4 bytes. Then the actual objects that you store (references to) in the array will consume memory depending on what kind of objects they are. A bare Object contains only a pointer to its class, so 8 bytes, or 4 bytes if using compressed pointers.
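As a quick sanity check, the formulas above can be evaluated directly; a minimal sketch (the constants mirror the quoted formulas, are typical HotSpot figures rather than values guaranteed by the JVM specification, and ignore trailing alignment padding):

```java
// Memory consumed by an Object[] of a given length, per the formulas above:
// header + length field + one slot per element. Illustrative only.
public class ArrayMemory {
    static long plainBytes(long len)      { return 8 + 4 + len * 8; } // 64-bit pointers
    static long compressedBytes(long len) { return 4 + 4 + len * 4; } // compressed pointers

    public static void main(String[] args) {
        System.out.println(plainBytes(10));       // 92
        System.out.println(compressedBytes(10));  // 48
    }
}
```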

What exactly is memory overhead in Java?

I have read what-is-the-memory-consumption-of-an-object-in-java and what-is-the-memory-overhead-of-an-object-in-java.

But I am still confused.

  • What is memory overhead? Is it the padding?
  • What is a JVM with compressed pointers? Is it about references?
  • If a 32-bit JVM is used, will the overhead be less? Of course yes. But is it because of padding?
  • So is it always better to use a 32-bit JVM, for memory efficiency or for performance?

The image below is from this link (page 26).

In this image, right at the start, 16 bytes of JVM overhead are shown. Why is that?

It is when more memory is used than the fields you created require.

Some of it is padding, which can appear anywhere in the object except the header, which is always at the start. The header is typically 8-12 bytes long.
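To make the header-plus-padding arithmetic concrete, here is a sketch that rounds a raw size up to the default 8-byte alignment, assuming a 12-byte header (64-bit JVM with a compressed class pointer); the helper names are mine, not a JVM API, and real layouts should be inspected with a tool such as JOL:

```java
// Estimate the size of a simple object: header + fields, rounded up to the
// 8-byte alignment boundary. Illustrative figures only.
public class SizeEstimate {
    static final int HEADER = 12;    // 8-byte mark word + 4-byte compressed class pointer
    static final int ALIGNMENT = 8;  // default object alignment

    // Pad a raw size up to the next multiple of the alignment.
    static int align(int rawSize) {
        return (rawSize + ALIGNMENT - 1) / ALIGNMENT * ALIGNMENT;
    }

    public static void main(String[] args) {
        System.out.println(align(HEADER + 4)); // one int field: 16 bytes, no padding
        System.out.println(align(HEADER + 8)); // two int fields: 20 -> 24 bytes, 4 bytes padding
    }
}
```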

What is JVM with compressed pointers?

A technique for using 32-bit pointers in a 64-bit JVM to save memory.

References can use this technique but so can pointers to the class information for an object.

If 32-bit JVM is used then overhead will be less?

Possibly, though this is the same as using compressed pointers for references and classes.

But is it because of padding?

It’s because 64-bit pointers use more space than 32-bit pointers.

So is it better to use always 32-bit JVM for memory efficiency or for performance?

No. The 32-bit processor model has 32-bit registers, whereas the 64-bit model has twice as many registers, each double the size (64-bit). This means far more can be held in the fastest memory, the registers. 64-bit calculations also tend to be faster with a 64-bit processing model.

In general I would recommend you always use the 64-bit JVM unless you a) can’t or b) have a very small amount of memory.

In this image, right at the start, 16 bytes of JVM overhead are shown. Why is that?

This is not strictly correct. It assumes a compressed class reference, so the header is 12 bytes; however, objects are 8-byte aligned by default, which means there will be 4 bytes of padding at the end. That totals 16 bytes, but it is not all at the start.

FAQ: Why can a 32-bit compressed OOP address more than 4 GB?

Objects have to be 8-byte aligned by default. This makes memory management easier but sometimes wastes space as padding. A side effect is that the address of every object has 000 as its lowest three bits (it must be a multiple of 8), so those bits don’t need to be stored. This allows a compressed oop to address 8 * 4 GB = 32 GB.

With a 16-byte object alignment the JVM can address 64 GB with a 32-bit reference (however, the padding overhead is higher and might not be worth it).

IFAQ: Why is it slower at around 28-32 GB?

While the reference can be multiplied by 8, the heap doesn’t start at the start of memory; it typically starts around 4 GB in. This means that if you want the full 32 GB, this base offset has to be added to every decoded reference, which has a slight overhead.
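The decoding arithmetic behind both answers can be sketched in a few lines. This only models the address computation; it is not how HotSpot's internals are exposed to Java code:

```java
// A 32-bit compressed reference stores address >> 3; decoding shifts it back.
// With a non-zero heap base, the base must be added as well: that extra
// addition is the slight overhead mentioned above for ~28-32 GB heaps.
public class CompressedOops {
    static long decode(long ref) {
        return ref << 3;                  // multiply by the 8-byte alignment
    }

    static long decodeWithBase(long heapBase, long ref) {
        return heapBase + (ref << 3);     // zero-based mode avoids this addition
    }

    public static void main(String[] args) {
        long maxRef = (1L << 32) - 1;                        // largest 32-bit value
        System.out.println(decode(maxRef) / (1L << 30));     // 31, i.e. just under 32 GB
        long base = 4L << 30;                                // heap starting ~4 GB in
        System.out.println(decodeWithBase(base, maxRef) / (1L << 30)); // 35
    }
}
```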



Measuring memory overhead in Java

I have spent a lot of time recently measuring Plumbr’s overhead and figuring out some ways of reducing it. When I started the measurements, I was frustrated to the extent that I had to let my feelings out in the form of a previous blog post. But now I am back in my constructive mode and ready to share more insights on the subject.

How do you measure memory overhead of a Java agent? The answer, as seen below, seems trivial at first:

“You measure memory consumption of the application with that agent and without it, then compare”.

As it turns out, even the “measure” part is not that obvious, not to speak of the “comparing”. Let me explain. We are using our own performance monitor to judge how well Plumbr fares. It runs as a separate thread in the JVM and periodically measures the amount of memory consumed by the application. Here is an example output for one of our test applications:

Memory Graph

We are using java.lang.Runtime to get the information from the JVM. In the picture we have lined up different metrics:

  • JVM heap’s capacity is measured by Runtime.totalMemory()
  • Used memory is calculated as Runtime.totalMemory() - Runtime.freeMemory()
  • Native memory usage is reported from OS’s /proc/self/status.
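The two heap metrics can be sampled with only java.lang.Runtime; a minimal sketch of such a monitoring loop (the output format is mine, and Plumbr's actual monitor is of course more involved):

```java
// Poll the JVM's own view of the heap once per second and print the
// current heap capacity and the used memory (total minus free).
public class MemorySampler {
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 3; i++) {
            long capacity = rt.totalMemory();
            long used = capacity - rt.freeMemory(); // used = total - free
            System.out.printf("capacity=%dK used=%dK%n", capacity / 1024, used / 1024);
            Thread.sleep(1000);
        }
    }
}
```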

As we can see, there are 3 different memory-related metrics. And seemingly the most important of them – the used memory amount – looks more like a cardiologist’s nightmare. So it does not seem to be the right source of information to base your overhead calculations upon.


We needed a better idea. And as usual it was truly simple – we turned to monitoring the size of the live data instead. Live data denotes the objects that are still alive after a full garbage collection cycle. This effectively serves as the lower bound of the heap required by the application: you can’t get away with less. How much heap is occupied by the application’s live data can easily be read from the GC log: find a line corresponding to a Full GC and read it from there. Here is an example:

[Full GC [PSYoungGen: 19377K->0K(217024K)] [PSOldGen: 161109K->152083K(261568K)] 180486K->152083K(478592K) [PSPermGen: 125886K->125886K(258688K)], 1.6528730 secs] [Times: user=1.65 sys=0.01, real=1.66 secs]

If you are unfamiliar with GC logs, let’s see what the useful bits in it are:

  • PSYoungGen: 19377K->0K(217024K) indicates that the objects in the young generation were fully collected: its used size was reduced from 19377 KB to 0 KB, and the total available size for the young generation is 217024 KB
  • The PSOldGen and PSPermGen sections contain the same information, but for different areas in memory – showing the changes in the old and permanent generations, respectively
  • The total time the full GC ran was 1.66 seconds

Now, the live data size is equal to the size of the old generation after the full GC. In our case it equals 152083 KB of heap memory. During the application’s stable phase this number doesn’t fluctuate very much, so you can usually take either the lower boundary or the average of the old generation size over several full GC runs.
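Pulling that number out of the log can be automated; a sketch that parses the ParallelGC Full GC line format shown above (the regex targets only this particular log format, and the class and method names are mine):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extract the live data size, i.e. the old generation size after a Full GC,
// from a ParallelGC log line.
public class LiveDataSize {
    static final Pattern OLD_GEN = Pattern.compile("PSOldGen: \\d+K->(\\d+)K");

    static long liveDataKb(String fullGcLine) {
        Matcher m = OLD_GEN.matcher(fullGcLine);
        if (!m.find()) throw new IllegalArgumentException("no PSOldGen section");
        return Long.parseLong(m.group(1)); // size after collection, in KB
    }

    public static void main(String[] args) {
        String line = "[Full GC [PSYoungGen: 19377K->0K(217024K)] "
            + "[PSOldGen: 161109K->152083K(261568K)] 180486K->152083K(478592K) "
            + "[PSPermGen: 125886K->125886K(258688K)], 1.6528730 secs]";
        System.out.println(liveDataKb(line)); // prints 152083
    }
}
```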

What it meant for us was that we now had a foundation on which to build the test suites for Plumbr to do the measuring. And we are ready to publish the numbers soon, after I have gotten rid of some obvious overheads we have already found during the measurements.



Published at DZone with permission of Nikita Salnikov-Tarnovski, DZone MVB.
