I have a master-worker setup with slave containers on a CentOS vApp, each slave running 6 executors. The host has 16 CPUs and 27 GB RAM. Java test cases run on top of it and take around 50 minutes to complete. I restrict the heap space inside each slave container with MaxRAMFraction set to 27, and I also have a cron job running every 10 minutes to clean the cache data.
In some cases, roughly 1 run in 25, the master container exits. Checking the debug logs, I find the stack trace below:
kernel: 20349 total pagecache pages
kernel: 17331 pages in swap cache
kernel: Swap cache stats: add 1219511, delete 1202180, find 10194505/10306288
kernel: Free swap  = 0kB
kernel: Total swap = 2097148kB
kernel: 7077758 pages RAM
kernel: 0 pages HighMem/MovableOnly
kernel: 131463 pages reserved
I see that free swap is 0 kB. Is this what is causing the containers to exit? If so, how do I resolve it? Do I need to increase the swap memory in the yml file? Will that work?
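For reference, this is the kind of change I mean: a minimal sketch of how RAM and swap limits can be set per container in a Compose file, assuming Compose v2 syntax and a hypothetical service name and image (the values are illustrative, not my actual config):

```yaml
version: "2.4"
services:
  jenkins-slave:            # hypothetical service name
    image: my-org/jenkins-slave:latest   # hypothetical image
    mem_limit: 4g           # hard RAM cap for the container
    memswap_limit: 6g       # RAM + swap combined; swap available = 6g - 4g = 2g
```

My understanding is that `memswap_limit` is the total of memory plus swap, so swap for the container is the difference between the two values; setting both equal would disable swap for that container entirely.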
Is that best practice, or do I need to clean the swap space every time the test suite finishes execution? How would I do that?
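To make the second option concrete, here is the kind of thing I am considering: a cron entry on the CentOS host (not inside a container) that resets swap after the nightly suite by turning it off and back on, which forces swapped pages back into RAM. The path and schedule are hypothetical, and `swapoff -a` needs root and will fail if free RAM cannot absorb the pages currently in swap:

```
# /etc/cron.d/reset-swap   (hypothetical file; adjust schedule to match the suite)
# Runs as root at 02:30, after the ~50 min test run has finished.
30 2 * * * root /usr/sbin/swapoff -a && /usr/sbin/swapon -a
```

I am not sure whether this is safe to do while the slave containers are idle but still running, which is part of what I am asking.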
Any help is appreciated.
Source: Docker Questions