We are running a Java application in a Kubernetes cluster. The application itself doesn’t have high demand for RAM, but I’ve noticed that it always consumes 1GB.
kubectl top pods
NAME                    CPU(cores)   MEMORY(bytes)
my-application-c0ffee   100m         1127Mi
my-application-c0ffee   100m         1109Mi
When I checked jcmd <pid> GC.heap_info inside the container, I got the following:
def new generation   total 89216K, used 12090K [0x00000000bc200000, 0x00000000c22c0000, 0x00000000d2c00000)
...
tenured generation   total 197620K, used 151615K [0x00000000d2c00000, 0x00000000decfd000, 0x0000000100000000)
...
Metaspace       used 146466K, capacity 152184K, committed 152576K, reserved 1183744K
  class space   used 18171K, capacity 19099K, committed 19200K, reserved 1048576K
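If you want to see the reserved vs. committed breakdown for the whole JVM (not just the heap and Metaspace), Native Memory Tracking reports it per category. A minimal sketch, assuming the JVM was started with NMT enabled:

```shell
# The JVM must be started with Native Memory Tracking on (small overhead):
#   java -XX:NativeMemoryTracking=summary -jar app.jar
# Then, inside the container, ask the running JVM for the breakdown.
# <pid> is the JVM process id (e.g. from "jcmd -l").
jcmd <pid> VM.native_memory summary
# In the output, the "Class" category shows the compressed class space:
# "reserved" is the 1GB address-space reservation, "committed" is the
# memory actually backed by RAM.
```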
As I understand it, by default Java reserves 1GB of virtual address space for the compressed class space: so that class metadata can be referenced with compressed 32-bit pointers, this block must be reserved up front as one contiguous region. When running outside of a container that's not a big deal, because this memory is not actually committed; it's only address space that is reserved.
But it seems to be a completely different situation when running inside a container, where the reserved memory appears to be counted as committed.
Does this mean that Java running in a container will by default consume at least 1GB of RAM?
And is there any other way to deal with that besides explicitly setting
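For reference, the flags that bound this reservation can be passed via JAVA_TOOL_OPTIONS without changing the image entrypoint. The 256m value below is only an illustrative cap, not a recommendation:

```shell
# Cap the compressed class space reservation. The cap must still exceed
# the class metadata actually in use (~150MB per the Metaspace line above):
export JAVA_TOOL_OPTIONS="-XX:CompressedClassSpaceSize=256m"

# Or drop compressed class pointers entirely: no 1GB reservation, but
# class references become full 64-bit pointers, slightly increasing
# per-object metadata overhead:
# export JAVA_TOOL_OPTIONS="-XX:-UseCompressedClassPointers"
```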
Source: Docker Questions