I have read quite a few papers on serverless cold starts, but I haven't found a clear explanation of what actually causes them. Could you explain it from both the commercial and the open-source platforms' points of view?
- Commercial platforms such as AWS Lambda or Azure Functions. I know they are more of a black box to us.
- Open-source platforms such as OpenFaaS, Knative, or OpenWhisk. Do those platforms also have a cold start issue?
My initial understanding of cold start latency is that it is the time spent spinning up a container. Once the container is up, it can be reused if it has not been killed yet, which gives a warm start. Is this understanding actually true? I have tried running a container locally from an image, and no matter how large the image is, the latency is close to zero; a rough sketch of how I measured this is below.
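(A minimal sketch, assuming Docker is installed and the image has already been pulled locally; `python:3.9-slim` is only an example image name.)

```python
import subprocess
import time

IMAGE = "python:3.9-slim"  # example image; assumed to be already pulled locally

# Time a full `docker run` cycle: start a container, run a no-op, and exit.
start = time.monotonic()
subprocess.run(["docker", "run", "--rm", IMAGE, "true"], check=True)
print(f"start-to-exit latency: {time.monotonic() - start:.3f}s")
```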
So maybe it is the time spent on k8s scheduling?
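If it is scheduling, I would expect a measurement like the following to show it (a sketch assuming `kubectl` access to a cluster; the pod and image names are just placeholders):

```python
import subprocess
import time

IMAGE = "python:3.9-slim"  # placeholder image
POD = "coldstart-test"     # placeholder pod name

# Measure submit-to-Ready time for a fresh pod, which includes scheduling,
# a possible image pull on the chosen node, and container startup.
t0 = time.monotonic()
subprocess.run(
    ["kubectl", "run", POD, f"--image={IMAGE}", "--restart=Never",
     "--command", "--", "sleep", "3600"],
    check=True,
)
subprocess.run(
    ["kubectl", "wait", "--for=condition=Ready", f"pod/{POD}", "--timeout=120s"],
    check=True,
)
print(f"submit-to-ready: {time.monotonic() - t0:.3f}s")
subprocess.run(["kubectl", "delete", "pod", POD], check=True)
```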
Is the image download time also part of cold start? But no matter how many cold starts happen on one node, only one image download is needed, so this doesn't seem to explain repeated cold starts on the same node.
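To separate the one-time download from per-container startup, I imagine measuring them separately, roughly like this (same assumed example image; `docker image rm` forces a genuinely fresh pull):

```python
import subprocess
import time

IMAGE = "python:3.9-slim"  # example image

# Remove the local copy so the next pull is a real download.
subprocess.run(["docker", "image", "rm", "-f", IMAGE], check=False)

t0 = time.monotonic()
subprocess.run(["docker", "pull", IMAGE], check=True)   # once per node
t1 = time.monotonic()
subprocess.run(["docker", "run", "--rm", IMAGE, "true"], check=True)  # per container
t2 = time.monotonic()

print(f"pull (once per node): {t1 - t0:.3f}s")
print(f"run (per cold start): {t2 - t1:.3f}s")
```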
Maybe a different question: I also wonder what happens when we instantiate a container from an image. Are the executable and its dependent libraries (e.g., Python libraries) copied from disk into memory during this stage? What if there are multiple containers based on the same image? I guess there should be multiple copies from disk to memory, because each container is an independent process. One way I tried to check this guess is sketched below.
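(A sketch assuming Docker with the default overlay2 storage driver; the `GraphDriver` fields come from `docker inspect` output.)

```python
import json
import subprocess

IMAGE = "python:3.9-slim"  # example image

# Start two containers from the same image.
ids = [
    subprocess.run(
        ["docker", "run", "-d", "--rm", IMAGE, "sleep", "60"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    for _ in range(2)
]

# With overlay2, LowerDir points at the shared read-only image layers; only
# the thin writable UpperDir is per-container. So the executables and
# libraries exist once on disk, and (as far as I understand) files read from
# them should be cached once in the kernel page cache and shared across
# containers, rather than copied once per container.
for cid in ids:
    info = json.loads(
        subprocess.run(
            ["docker", "inspect", cid],
            check=True, capture_output=True, text=True,
        ).stdout
    )[0]
    data = info["GraphDriver"]["Data"]
    print(cid[:12], "UpperDir:", data["UpperDir"])
    print(cid[:12], "LowerDir:", data["LowerDir"][:120], "...")
```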
https://stackoverflow.com/questions/67326936/what-causes-cold-start-in-serverless