Question:
Is scheduling a Lambda function to be invoked every 20 minutes with CloudWatch the best way to reduce Lambda cold start times? (Not to get rid of them completely.)
Will this get pricey, or is there something I am missing? I have it set up right now and I think it is working.
Before, my cold start time was around 10 seconds and every subsequent call completed in about 80 ms. Now every call, no matter how frequent, is around 80 ms. Is this a good method until your user base grows, at which point you can turn it off?
My second option is using Elastic Beanstalk and having a server running 24/7, but that sounds expensive, so I'd rather not.
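For reference, the schedule itself is a one-time setup. A minimal sketch with boto3 (the function name and the warm-up payload convention are hypothetical, not from the question):

    import boto3

    events = boto3.client("events")
    lam = boto3.client("lambda")

    FUNCTION_NAME = "my-function"  # hypothetical name

    # CloudWatch Events rule that fires every 20 minutes.
    rule_arn = events.put_rule(
        Name="keep-warm",
        ScheduleExpression="rate(20 minutes)",
        State="ENABLED",
    )["RuleArn"]

    # Allow CloudWatch Events to invoke the function.
    lam.add_permission(
        FunctionName=FUNCTION_NAME,
        StatementId="keep-warm-events",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule_arn,
    )

    # Point the rule at the function, with a recognizable warm-up payload.
    fn_arn = lam.get_function(FunctionName=FUNCTION_NAME)["Configuration"]["FunctionArn"]
    events.put_targets(
        Rule="keep-warm",
        Targets=[{"Id": "keep-warm-target", "Arn": fn_arn, "Input": '{"warmup": true}'}],
    )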
Answer 1:
As far as I know, this is the only way to keep the function hot right now. It can get pricey only when you have a lot of those functions.
You'd have to calculate for yourself how much you pay to keep your functions alive, considering how many of them you have, how long each invocation takes, and how much memory you need.
But once every 20 minutes is roughly 2,000 invocations per month, so if you use e.g. 128MB and make them finish in under 100ms, that is about 200 seconds of compute per month per function (2,000 x 0.1 s), and you could keep quite a lot of such functions alive at 20-minute intervals and still be under the free tier. You don't even need to turn it off after you get a bigger load, because it will be irrelevant at that point. Besides, you can never be sure of getting a uniform load all the time, so you might keep your heartbeat code active even then.
Though my guess is that since it is so cheap to keep a function alive (especially if you have a special argument that makes it return immediately) and the difference is so great (10 seconds vs. 80 ms), pretty much everyone will do it - there is pretty much no excuse not to. In that case I expect Amazon to either fight that practice (by making it hard or more expensive than it currently is - which wouldn't be a smart move) or to make it unnecessary in the future. If the difference between hot and cold start were 100 ms, then no one would bother. If it is 10 seconds, then everyone needs to work around it.
There will always have to be a difference between running code that ran a second ago and code that last ran a month ago, because keeping all of it in RAM and ready to go would waste a lot of resources, but I see no reason why that difference couldn't be made less noticeable, or even have a few more steps instead of just hot and cold start.
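To illustrate the "special argument" idea: a minimal handler sketch (Python; the "warmup" key is a hypothetical convention matching the payload in the sketch above) that returns immediately on the scheduled ping, so warm-up invocations finish in a few milliseconds:

    def do_real_work(event):
        # Placeholder for the function's actual logic.
        return {"status": "ok"}

    def handler(event, context):
        # Short-circuit on the keep-warm ping from the scheduled rule.
        if isinstance(event, dict) and event.get("warmup"):
            return {"warmed": True}
        return do_real_work(event)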
Answer 2:
You can improve the cold start time by allocating more memory to your Lambda function. With the default 512MB, I am seeing cold start times of 8-10 seconds for functions written in Java. This improves to 2-3 seconds with 1536MB of memory.
Amazon says that it is the CPU allocation that really matters, but there is no way to directly change it. CPU allocation increases proportionately to memory.
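For instance, raising the memory setting is a one-line configuration change; a sketch with boto3 (function name hypothetical):

    import boto3

    boto3.client("lambda").update_function_configuration(
        FunctionName="my-function",  # hypothetical
        MemorySize=1536,             # in MB; the CPU share scales with this value
    )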
And if you want close to zero cold start times, keeping the function warm is the way to go, as rsp suggested above.
Answer 3:
Besides adding more memory to the Lambda, there is one more approach to reduce cold starts: use the GraalVM native-image tool. The jar is compiled ahead of time into a native binary, so part of the work that would otherwise happen at startup on AWS is done at build time. When you upload your build to AWS, select "Custom runtime" instead of java8.
Helpful article: https://engineering.opsgenie.com/run-native-java-using-graalvm-in-aws-lambda-with-golang-ba86e27930bf
Beware:
but it also has its limitations; it does not support dynamic class loading, and reflection support is also limited
Answer 4:
Azure has a pre-warming solution for serverless instances (Link). This would be a great feature in AWS Lambda if and when they implement it.
Instead of the user warming the instance at the application level, it's handled by the cloud provider in the platform.
Answer 5:
Pinging the function would not cover the case of simultaneous requests from users, or the same page sending a few API requests asynchronously - each concurrent request beyond the one warm instance can still hit a cold start.
A better solution is to dump the 'warmed-up' state into a Docker checkpoint. It is especially useful for dynamic languages, where warm-up itself is fast but loading all the libraries is slow.
For details, read:
https://criu.org/Docker
https://www.imperial.ac.uk/media/imperial-college/faculty-of-engineering/computing/public/1819-ug-projects/StenbomO-Refunction-Eliminating-Serverless-Cold-Starts-Through-Container-Reuse.pdf
Other hints:
- use more memory
- use Python or JavaScript with only the most basic libraries, and try to eliminate bulky ones (a lazy-import sketch follows below)
- create several 'microservices' to reduce the chance of many users hitting the same function
See more at https://www.jeremydaly.com/15-key-takeaways-from-the-serverless-talk-at-aws-startup-day/
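One related trick for the "eliminate bulky ones" hint (a common workaround, not stated in the answer itself): import heavy modules lazily, so code paths like the warm-up ping that don't need the heavy dependency never pay for its import. A sketch with a hypothetical bulky dependency:

    _model = None

    def handler(event, context):
        global _model
        if _model is None:
            import heavy_ml_lib  # hypothetical bulky dependency
            _model = heavy_ml_lib.load_model()
        return _model.predict(event)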
Answer 6:
As of December 2019, AWS Lambda supports Provisioned Concurrency, so you can set the number of function instances that will be initialized and ready for new calls [1].
The downside is that you are charged for the provisioned concurrency. If you provision a concurrency of 1 for a 128MB Lambda kept active 24 hours a day for a whole month, you will be charged for: 1 instance x 30 days x 24 hr x 60 min x 60 sec x (128/1024 GB) = 324,000 GB-sec (almost all of the capacity AWS gives in the Lambda free tier) [2].
With the above you will get a Lambda instance that responds very fast...subsequent concurrent calls beyond the provisioned count may still suffer a "cold start" though.
What's more, you can configure Application Auto Scaling to dynamically manage the provisioned concurrency of your Lambda [3].
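A hedged sketch of enabling this with boto3 (names hypothetical; provisioned concurrency must target a published version or alias, not $LATEST):

    import boto3

    lam = boto3.client("lambda")

    # Keep one pre-initialized instance of the "prod" alias warm.
    lam.put_provisioned_concurrency_config(
        FunctionName="my-function",  # hypothetical
        Qualifier="prod",            # alias or published version
        ProvisionedConcurrentExecutions=1,
    )

    # Optionally let Application Auto Scaling manage the value dynamically.
    aas = boto3.client("application-autoscaling")
    aas.register_scalable_target(
        ServiceNamespace="lambda",
        ResourceId="function:my-function:prod",
        ScalableDimension="lambda:function:ProvisionedConcurrency",
        MinCapacity=1,
        MaxCapacity=10,
    )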
Refs:
- https://aws.amazon.com/blogs/aws/new-provisioned-concurrency-for-lambda-functions/
- https://aws.amazon.com/lambda/pricing/
- https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html