azagrebin commented on a change in pull request #328: URL: https://github.com/apache/flink-web/pull/328#discussion_r411995827 ########## File path: _posts/2020-04-17-memory-management-improvements-flink-1.10.md ########## @@ -0,0 +1,87 @@ +--- +layout: post +title: "Memory Management Improvements with Apache Flink 1.10" +date: 2020-04-17T12:00:00.000Z +authors: +- andrey: + name: "Andrey Zagrebin" +categories: news +excerpt: This post discusses the recent changes to the memory model of the Task Managers and configuration options for your Flink applications in Flink 1.10. +--- + +Apache Flink 1.10 comes with significant changes to the memory model of the Task Managers and configuration options for your Flink applications. These recently-introduced changes make Flink more adaptable to all kinds of deployment environments (e.g. Kubernetes, Yarn, Mesos), providing strict control over its memory consumption. In this post, we describe Flink’s memory model, as it stands in Flink 1.10, how to set up and manage memory consumption of your Flink applications, and the recent changes the community implemented in the latest Apache Flink release. + +## Introduction to Flink’s memory model + +Having a clear understanding of Apache Flink’s memory model allows you to manage resources for the various workloads more efficiently. The following diagram illustrates the main memory components in Flink: + +<center> +<img src="{{ site.baseurl }}/img/blog/2020-04-17-memory-management-improvements-flink-1.10/total-process-memory.svg" width="400px" alt="Flink: Total Process Memory"/> +<br/> +<i><small>Flink: Total Process Memory</small></i> +</center> +<br/> + +The Task Manager process is a JVM process. On a high level, its memory consists of the *JVM Heap* and *Off-Heap* memory. These types of memory are consumed by Flink directly or by the JVM for its specific purposes (e.g. Metaspace). 
There are two major memory consumers within Flink: the user code of job operator tasks and the framework itself, which consumes memory for internal data structures, network buffers, etc. + +**Please note that** the user code has direct access to all memory types: *JVM Heap, Direct* and *Native memory*. Therefore, Flink cannot really control their allocation and usage. There are, however, two types of Off-Heap memory which are consumed by tasks and controlled explicitly by Flink: + +- *Managed Memory* (Off-Heap) +- *Network Buffers* + +The latter is part of the *JVM Direct Memory*, allocated for user record data exchange between operator tasks. + +## How to set up Flink memory + +With the latest release of Flink 1.10, and in order to provide a better user experience, the framework comes with both high-level and fine-grained tuning of memory components. There are essentially three alternatives to setting up memory in Task Managers. + +The first two — and simplest — alternatives are configuring one of the two following options for total memory available for the JVM process of the Task Manager: + +- *Total Process Memory*: total memory consumed by the Flink Java application (including user code) and by the JVM to run the whole process. +- *Total Flink Memory*: only memory consumed by the Flink Java application, including user code but excluding memory allocated by the JVM to run it. + +It is advisable to configure the *Total Flink Memory* for standalone deployments where explicitly declaring how much memory is given to Flink is a common practice, while the outer *JVM overhead* is of little interest. 
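As a minimal sketch of these two high-level alternatives, either can be set with a single entry in `flink-conf.yaml`, using the option keys `taskmanager.memory.process.size` and `taskmanager.memory.flink.size` (the sizes below are arbitrary example values, not recommendations):

```yaml
# flink-conf.yaml — configure ONE of the two total-memory options.

# Alternative 1: Total Process Memory. Caps the whole JVM process,
# including JVM overhead; suited to containerized deployments where
# this becomes the size of the requested container.
taskmanager.memory.process.size: 4096m

# Alternative 2: Total Flink Memory. Covers only memory consumed by
# the Flink Java application; common for standalone deployments.
# taskmanager.memory.flink.size: 3072m
```

Only one of the two keys should normally be set; configuring both over-constrains the memory model.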
For the cases of deploying Flink in containerized environments (such as [Kubernetes](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/kubernetes.html), [Yarn](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/yarn_setup.html) or [Mesos](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/mesos.html)), the *Total Process Memory* option is recommended instead, because it becomes the size of the total memory requested for the container. Containerized environments usually strictly enforce this memory limit. + +If you want more fine-grained control over the size of *JVM Heap* and *Managed Memory* (Off-Heap), there is also a third alternative: configuring both *[Task Heap](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/memory/mem_setup.html#task-operator-heap-memory)* and *[Managed Memory](https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/memory/mem_setup.html#managed-memory)*. This alternative gives a clear separation between the heap memory and any other memory types. + +In line with the community’s efforts to [unify batch and stream processing](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html), this model works universally for both scenarios. It allows sharing the *JVM Heap* memory between the user code of operator tasks in any workload and the heap state backend in stream processing scenarios. In a similar way, the *Managed Off-Heap Memory* can be used for batch spilling and for the RocksDB state backend in streaming. Review comment: ```suggestion In line with the community’s efforts to [unify batch and stream processing](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html), this model works universally for both scenarios. It allows sharing the *JVM Heap* memory between the user code of operator tasks in any workload and the heap state backend in stream processing scenarios. 
In a similar way, the *Managed Memory* can be used for batch spilling and for the RocksDB state backend in streaming. ``` ---------------------------------------------------------------- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: [hidden email]