I want to develop a project using the Flink stack inside a project that uses a custom distributed system, so I'd like to use my distributed system as the resource manager instead of overloading the project with many additional sockets and code.

Is there a way to embed a Flink project in my server without using another external resource manager? For example, a way to set a ResourceManagerInterface, I don't know...
Hi Cristian,
I think you could try a standalone cluster on X (your underlying cluster framework). By now it works well for Kubernetes [1].

If your custom distributed system supports starting an application from a yaml/json configuration, then starting a session cluster is very simple. For a job cluster, you need to build your own image with the user jar and all dependencies included, and then set the JobManager's command to standalone-job.sh.

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/deployment/kubernetes.html

Best,
Yang
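To illustrate the session-cluster route Yang describes: once your system has brought up a standalone session cluster, your own server process can submit jobs to it remotely. A minimal sketch, assuming a hypothetical JobManager address jobmanager-host, the default REST port 8081 (older setups used the RPC port instead), and a placeholder jar path:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SubmitToStandaloneCluster {
    public static void main(String[] args) throws Exception {
        // Connect to the remote standalone JobManager instead of starting a local mini cluster.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host",    // placeholder: address of the standalone JobManager
                8081,                 // placeholder: REST port of the session cluster
                "/path/to/job.jar");  // placeholder: jar containing the job classes

        // Trivial pipeline, just to show the submission path.
        env.fromElements(1, 2, 3)
           .map(x -> x * 2)
           .print();

        env.execute("example job on standalone session cluster");
    }
}
```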
Hi Cristian,
For the latest off-the-shelf Flink releases, I agree with Yang that running a standalone cluster on top of your custom distributed system is, as far as I know, the only way.

Customizing Flink and implementing your own Flink ResourceManager is also an option. The Flink ResourceManager is the component that handles interactions with external distributed systems, and it is implemented differently for standalone, YARN, and Mesos, and potentially Kubernetes in the future. Unfortunately, this is not as easy as just implementing a set of interfaces at the moment, and it may require some knowledge of Flink deployment. In general, you also need to touch the client, to submit jobs to your custom distributed system; the cluster entry point, to bring the Flink cluster up on your system and to specify your customized ResourceManager; and the task manager runner, to bring up TaskManagers on your custom system. You can refer to the flink-yarn module in the source code for details.

Thank you~
Xintong Song
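A rough, hypothetical sketch of the responsibilities Xintong lists. These names are illustrative only, not the actual Flink interfaces; the real extension point is the abstract ResourceManager class in flink-runtime, with YarnResourceManager in the flink-yarn module as the reference implementation, and its exact signatures vary between Flink versions:

```java
// Hypothetical outline of the hooks a custom-cluster integration would need.
// It only mirrors, roughly, what the flink-yarn module implements.
public interface CustomClusterIntegration {

    // ResourceManager side: acquire a container/process from the custom
    // distributed system and start a TaskManager in it.
    void startNewWorker(int memoryMb, int cpuCores);

    // ResourceManager side: release a worker whose slots are no longer needed.
    void stopWorker(String workerId);

    // Cluster entry point: bring up the JobManager/Dispatcher on the custom
    // system and wire in the customized ResourceManager.
    void startClusterEntrypoint();

    // TaskManager runner: bootstrap a TaskManager process inside a worker
    // allocated by the custom system.
    void startTaskManagerRunner(String workerId);

    // Client side: submit a packaged job to the cluster running on the custom system.
    void submitJob(String jobJarPath, String... args);
}
```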