
Set a max degree of parallelism #1207

Closed
sjkp opened this issue Feb 16, 2017 · 21 comments

@sjkp

sjkp commented Feb 16, 2017

Currently, using the host.json file, you can control the max degree of parallelism that a single function app instance runs with. But when a queue gets long, the runtime will start to spin up more function instances. It would be great if you could set an upper limit on how many function instances you want to scale out to.

There are many cases where it is not preferable to empty the queues as fast as possible, but rather to process them at a steady pace so other systems can keep up.
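The host.json knob mentioned above, for queue-triggered functions on the v1 runtime, looks roughly like this (the values shown are illustrative, not recommendations):

```json
{
  "queues": {
    "batchSize": 1,
    "newBatchThreshold": 0,
    "maxDequeueCount": 5
  }
}
```

With `batchSize` 1 and `newBatchThreshold` 0, a single instance processes one message at a time; but as this thread notes, that does nothing to stop the platform from adding more instances.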

@lindydonna
Contributor

We could have an app setting that the central listener uses to decide how many instances to scale to.

@lindydonna lindydonna added this to the Next - Triaged milestone Feb 23, 2017
@sjkp
Author

sjkp commented Feb 24, 2017

Sounds good @lindydonna, that would solve my immediate issue.

But if you want to think ahead, it might be nice to also add some sort of callback that could control when to scale or stop scaling, so that we could write our own scaling logic. I'm thinking something along the lines of the rampUpRules for traffic management in normal Azure web apps: https://docs.microsoft.com/en-us/rest/api/appservice/webapps

@jofultz

jofultz commented Feb 28, 2017

+1 on this. @lindydonna, what you describe would be a good stop-gap and provide headroom to work on something more like the callback control mechanism described by @sjkp.

In my case, I want to control the scale in accordance with throughput limits on DocumentDB. Thus, I'd like to limit scale only as I approach my throughput limits. This could be true with any number of integrated external resources: hubs, 3rd-party webhooks, messaging limits (e.g., Twilio, SendGrid, etc.), processing limits (e.g., image processing services).

@ghost

ghost commented Mar 2, 2017

It would be great if this could be set on a per-app basis in host.json (max parallel global invocations) and on a per-function basis via function.json (max parallel instance invocations).

@davidebbo davidebbo modified the milestones: Next - Triaged, Stability Mar 6, 2017
@davidebbo davidebbo modified the milestones: April 2017, Stability Mar 21, 2017
@paulbatum paulbatum modified the milestones: April 2017, May 2017 May 1, 2017
@npiasecki

+1 on this. I'm talking to an API that rate-limits me to 2 calls per second, and I queue up a lot of work (read a list of work items from a CSV file and add them to a queue), so I want to burn down the queue slowly.

Setting the batchSize to 1 works until the queue gets large and Functions decides to spin up more instances. It took me a few false starts to figure out how to work around this, and the best I came up with was manipulating the initialVisibilityDelay when adding the batch of messages to the queue, spacing them out generously so Functions won't try to execute them concurrently.

That works in this case, but sometimes you really do want a central throttle: just one worker churning slowly in the background no matter how big the queue gets, because you can't go any faster.
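The staggered-visibility workaround described above can be sketched as a small dry-run script that only prints the commands it would run. The queue name, message contents, and spacing are illustrative, and the use of `az storage message put --visibility-timeout` is an assumption about how you'd enqueue the delayed messages:

```shell
# Dry-run sketch of the initialVisibilityDelay workaround: give each
# message an increasing visibility delay so only a trickle of messages
# is ever visible and the runtime has no reason to scale out.
# Queue name and spacing are placeholders; commands are echoed, not run.
QUEUE="work-items"
SPACING=30          # seconds between messages becoming visible
i=0
for msg in "job-1" "job-2" "job-3"; do
  delay=$((i * SPACING))
  echo "az storage message put --queue-name $QUEUE --content $msg --visibility-timeout $delay"
  i=$((i + 1))
done
```

Each successive message becomes visible 30 seconds after the previous one, so a single worker with batchSize 1 can keep up and the queue length visible to the scale controller stays near zero.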

@paulbatum paulbatum modified the milestones: May 2017, June 2017 Jun 20, 2017
@paulbatum
Member

The new WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT setting can help with this. It's not 100% bulletproof, but in the majority of cases it will achieve the desired goal of limiting concurrency.
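Since this is an ordinary app setting, it can be applied with the Azure CLI. A dry-run sketch that just prints the call (the resource group and app name are placeholders):

```shell
# Dry-run sketch: print the az CLI call that pins a Consumption-plan app
# to at most MAX_INSTANCES instances. Names are placeholders; the
# command is echoed rather than executed.
MAX_INSTANCES=2
CMD="az functionapp config appsettings set -g my-rg -n my-func-app \
--settings WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT=$MAX_INSTANCES"
echo "$CMD"
```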

https://github.com/projectkudu/kudu/wiki/Configurable-settings#limit-the-scaling-of-function-apps

@paulbatum paulbatum modified the milestones: Next, June 2017 Jul 12, 2017
@paulbatum
Member

Leaving this issue open as WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT only gets us part of the way there.

@ishepherd

ishepherd commented Jul 3, 2018

Ping. Any plans to work further on this, e.g. the ideas from @jofultz or @sjkp?
We have a production incident at the moment, apparently caused by the enthusiastic scaling of the Consumption plan for a lower-priority background task, enough that it's overwhelming our main Azure Storage account.

Edit: also, any plans to graduate WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT out of preview?

@paulbatum
Member

Sorry, at this time I have no updates to share on this.

@PatrikNorrgard

Hi, any news on this?

@tkholmes

tkholmes commented Oct 18, 2018

> The new WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT setting can help with this. It's not 100% bulletproof, but in the majority of cases it will achieve the desired goal of limiting concurrency.
>
> https://github.com/projectkudu/kudu/wiki/Configurable-settings#limit-the-scaling-of-function-apps

Can someone help me understand why this is only a tenable solution some of the time? We're running under the Consumption model (but are also willing to run on a single instance under a regular App Service plan). Under what circumstances will this setting NOT keep concurrency to the configured value (e.g. WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT=1)?

Thanks in advance!

@ishepherd

@tkholmes There is now a very small amount of additional info on docs.microsoft.com:

> This setting is ... only reliable if set to a value <= 5

@reisenberger

For others landing here: linking a recent comment from @cgillum on another thread:

> We have plans to replace this app setting [i.e. WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT] with something more reliable and settable through the portal and/or ARM. No specifics to share yet, though, other than we hope to have it available for Consumption plans sometime this calendar year.

@SashaPshenychniy

Hopefully Microsoft can make something more granular than the currently available global settings (e.g. WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT, maxConcurrentActivityFunctions, or queues.batchSize). If I have a few different functions hosted in a single Function App, it is very unlikely that all of those functions have the same concurrency limits, yet they end up constraining one another's concurrency.

Thinking further, it is certainly possible for some functions to interfere with others. For example, suppose there is a long-running task blocking a SQL table, plus many other small update operations on that table which can execute in parallel among themselves: you don't want hundreds of those small tasks taking SQL connections and waiting on the table lock while the long-running one is executing; you'd rather prevent them from starting until the long-running one completes. That would be a great feature, especially if some AI automatically detected correlations between task execution times and their arguments and adjusted the global subscription execution plan accordingly... Probably I'm asking too much, but who knows =).

@paulbatum
Member

@SashaPshenychniy more granular control is discussed in a few other issues:
Azure/azure-webjobs-sdk#1680
#511

This is a capability we would really like to add to Functions. It's mostly a question of priority compared to everything else in our backlog. But since this issue is about how many instances of your function app run, I would suggest moving any further discussion about granular control to one of the issues I linked.

@loomchild

Side note, maybe useful for someone: for timer-based functions, singleton behavior is enforced by default: https://stackoverflow.com/a/53919048/4619705

In my particular case that's sufficient, since I can launch a function every minute to consume a specific amount of events depending on the desired throughput. Even if the function takes longer than 1 minute, another one won't be executed.
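For reference, a timer trigger that fires once a minute looks roughly like this in function.json (the binding name is illustrative; the schedule is a six-field NCRONTAB expression):

```json
{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 * * * * *"
    }
  ]
}
```

Because the timer trigger takes a singleton lock by default, only one invocation runs at a time even when the app scales out to multiple instances, which is what makes this pattern work as a throttle.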

@joeyeng

joeyeng commented Jul 31, 2020

Hey @cgillum, just wondering if there's an update on this, since this was over a year ago and it sounded like something was planned to come out last year. Thanks!

> For others landing here: linking a recent comment from @cgillum on another thread:
>
> We have plans to replace this app setting [i.e. WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT] with something more reliable and settable through the portal and/or ARM. No specifics to share yet, though, other than we hope to have it available for Consumption plans sometime this calendar year.

@cgillum
Contributor

cgillum commented Jul 31, 2020

Wow, time flies! Yes, we have an update: the new ARM setting should be available in production now for a limited set of regions, but I don't believe we've documented it yet or deployed the portal UX. The hard parts are done, so you should be able to expect an announcement about its availability soon-ish.

@SamNutkins

@cgillum - anywhere specifically we can stay tuned for an update on this?

@cgillum
Contributor

cgillum commented Aug 13, 2020

@jeffhollan I assume there will be an official announcement at some point. Any specific recommendations for where customers should look for this?

@jeffhollan

Yes, just got this working and confirmed with @wenhzha - if you run this command you can specify the maximum number of instances you can scale to (some value between 1 and 200). Docs should be merging soon.

```shell
az resource update --resource-type Microsoft.Web/sites -g <resource_group> -n <function_app_name>/config/web --set properties.functionAppScaleLimit=<scale_limit>
```

I realize there are more specific flavors of this we could implement, but for the sake of tracking I'm going to close this, as it does cover the initial scenarios. I would encourage folks to create new issues for more granular control.
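Following the same `az resource` pattern, the configured limit can be read back with `az resource show`. A dry-run sketch that only prints the call (resource group and app name are placeholders):

```shell
# Dry-run sketch: print the az CLI call that reads back the scale limit
# set via properties.functionAppScaleLimit. Names are placeholders; the
# command is echoed rather than executed.
RG="my-rg"
APP="my-func-app"
CMD="az resource show --resource-type Microsoft.Web/sites -g $RG \
-n $APP/config/web --query properties.functionAppScaleLimit"
echo "$CMD"
```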

@Azure Azure locked as resolved and limited conversation to collaborators Sep 17, 2020