The container cut-off accounts not only for metaspace, but also for native
memory footprint such as thread stacks, the code cache, and the compressed
class space. If you run streaming jobs with the RocksDB state backend, it
also accounts for the RocksDB memory usage.
The consequence of reducing the cut-off depends on your environment and
workloads. For standalone clusters, the cut-off has no effect. For
containerized environments, depending on the Yarn/Mesos configuration, your
container may or may not get killed for exceeding the container memory limit.
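For reference, here is a minimal sketch (my own simplified Java rendering,
not the actual Flink code) of how the cut-off follows from the formula you
quoted; the 600 MB floor corresponds to containerized.heap-cutoff-min, and
the constants are illustrative, your deployment may override them:

    // Simplified sketch of the Flink 1.6 containerized cut-off calculation.
    public class CutoffSketch {

        // Assumed defaults: containerized.heap-cutoff-min = 600 MB,
        // containerized.heap-cutoff-ratio = 0.25.
        static long cutoffMB(long taskManagerMemoryMB, double cutoffRatio, long cutoffMinMB) {
            return Math.max(cutoffMinMB, (long) (taskManagerMemoryMB * cutoffRatio));
        }

        public static void main(String[] args) {
            long tmMemoryMB = 4096; // 4 GB TaskManager
            System.out.println("ratio 0.25 -> " + cutoffMB(tmMemoryMB, 0.25, 600) + " MB"); // 1024 MB
            System.out.println("ratio 0.15 -> " + cutoffMB(tmMemoryMB, 0.15, 600) + " MB"); // 614 MB
        }
    }

So with a ratio of 0.15 and 4 GB of TaskManager memory, the cut-off would
drop to roughly 614 MB; whether that is enough depends on how much of the
native footprint listed above your jobs actually use.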
Thank you~
Xintong Song
On Tue, Mar 31, 2020 at 5:34 PM LakeShen <
[hidden email]> wrote:
> Hi community,
>
> Now I am optimizing the Flink 1.6 task memory configuration. Looking at the
> source code, I see that Flink first configures the cut-off memory as cut-off
> memory = Math.max(600, containerized.heap-cutoff-ratio * TaskManager
> Memory), where containerized.heap-cutoff-ratio defaults to 0.25. For
> example, if the TaskManager memory is 4 GB, the cut-off memory is 1 GB.
>
> However, having enabled the TaskManager's gc.log, I find that the metaspace
> only uses about 60 MB. I personally feel that the cut-off memory
> configuration is a little too large. Can this cut-off memory be reduced,
> e.g. by setting containerized.heap-cutoff-ratio to 0.15?
> Would there be any problem with this configuration?
>
> I look forward to your reply.
>
> Best wishes,
> LakeShen
>