Journey from cPanel to Serverless: Thoughts on the Future of Full Stack Development and Deployment
Six years ago, when I started as a developer, I used shared hosting with cPanel to deploy my clients' websites. Although the experience was only average, it was very cost-effective. Some of my old sites are still on shared hosting. Those sites use PHP/WordPress and receive barely 5,000 to 10,000 requests per month. I also put Cloudflare DNS in front of those sites to leverage its free services.
However, as I progressed in my journey as a developer, I began to work on more complex projects for bigger companies. I discovered that shared hosting is not what the industry typically uses. My focus therefore shifted towards rented servers from AWS, GCP, and DigitalOcean (DO). Different platforms have different names for their server offerings: AWS calls them EC2 instances, while DO calls them Droplets. Working with these rented servers helped me develop a deeper understanding of cloud computing and its benefits, such as increased security, redundancy, and global accessibility. It also gave me the opportunity to learn a lot about virtual machines, NGINX, reverse and forward proxies, SSL certificates, TCP connections, Linux, and more.
Likewise, as I continued to expand my knowledge of server infrastructure, I began to explore more advanced tools such as Docker and Kubernetes that could further enhance my career as a developer. The concept of containerization, where you build an image of your software from a script on top of a base image (often a minimal operating system), was intriguing to me. Deployment to different environments was fast, and writing CI/CD pipelines that seamlessly deployed full-stack applications from multiple Git branches was fascinating. In certain projects, I used Helm and Kubernetes to orchestrate containers and deploy applications to a managed Kubernetes service. Other times, I explored services like AWS Fargate, Elastic Beanstalk, App Runner, and even Heroku to deploy our containers. The idea of containerizing an application once and deploying it on any managed platform was truly captivating.
Then began the time when I started to look at serverless and serverless services. I had been familiar with function-as-a-service offerings such as Lambda since 2018, but back then I did not fully grasp their significance and how they could transform the world of full-stack development and deployment. Serverless provides several advantages, such as auto-scaling, extremely quick deployments, and no server management overhead. These are all extremely beneficial, but the most important benefit is the pay-per-use model: you pay only for the resources you actually use.
For instance, if you have an API that gets 100,000 requests per month, then with serverless you only pay for those 100,000 requests and the compute your function used while executing. And this is all very cheap. For example, a million requests to an AWS Lambda function cost roughly $0.20, excluding compute time. This is an advantage over a serverful architecture, where if you buy a server with, say, 4 GB of RAM and 2 vCPUs to run your application, you pay for the full 4 GB and 2 vCPUs even when your application has little to no traffic. Moreover, if your site sees heavy usage that those resources cannot handle, your server could crash and restart, whereas in serverless, scaling is handled by the provider itself. Because each invocation runs in its own isolated environment, a failure in one execution also does not bring down your entire application.
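To make the pricing difference concrete, here is a rough back-of-the-envelope sketch. The rates are assumptions based on AWS's published Lambda pricing (about $0.20 per million requests plus roughly $0.0000166667 per GB-second of compute; actual rates vary by region and architecture), compared against paying a fixed monthly price for an always-on server.

```javascript
// Rough cost sketch: pay-per-use Lambda vs. a fixed-price server.
// The rates below are assumptions based on published AWS pricing and
// vary by region; treat this as an estimate, not a quote.
const PRICE_PER_MILLION_REQUESTS = 0.20;   // USD
const PRICE_PER_GB_SECOND = 0.0000166667;  // USD

function lambdaMonthlyCost(requests, memoryGb, avgDurationSeconds) {
  const requestCost = (requests / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  const computeCost = requests * memoryGb * avgDurationSeconds * PRICE_PER_GB_SECOND;
  return requestCost + computeCost;
}

// 100,000 requests/month at 128 MB memory and 100 ms average duration:
const cost = lambdaMonthlyCost(100_000, 0.128, 0.1);
console.log(cost.toFixed(4)); // a few cents, versus a fixed bill every month for an idle server
```

With low traffic the serverless bill rounds to pocket change, while the serverful bill stays flat whether or not anyone visits your site.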
In recent years, there has been a lot of innovation in the serverless industry, and many tech companies and developers around the world have integrated serverless into their stacks. Cloud providers such as Vercel, the creator of the popular full-stack framework NextJS, use serverless architecture to run API routes. Today, you can build and deploy a serverless REST API using frameworks such as HonoJS and SST; with SST you can even deploy full-stack applications. And it is not only serverless functions: various other serverless services have gained popularity in the DevOps market, including serverless databases, serverless edge functions, serverless key-value stores, and serverless authentication.
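As a minimal sketch of what a serverless endpoint looks like, here is a Lambda-style handler in plain Node.js. The event shape loosely mirrors API Gateway's proxy format, but the field names here are illustrative; frameworks like HonoJS and SST wrap this plumbing with nicer routing APIs.

```javascript
// A Lambda-style handler: just a function that receives an event and
// returns a response object. There is no server process to manage; the
// platform invokes this function per request and scales it automatically.
async function handler(event) {
  const name =
    (event.queryStringParameters && event.queryStringParameters.name) || "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}

// Local invocation for testing; in production the cloud provider calls handler().
handler({ queryStringParameters: { name: "serverless" } }).then((res) =>
  console.log(res.statusCode, res.body)
);
```

The whole "deployment unit" is this one function, which is what makes per-request billing and automatic scaling possible.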
Serverless databases, like serverless functions, are charged based on usage rather than provisioned resources. PlanetScale, Neon, and AWS DynamoDB are some serverless database providers that charge based on usage and support auto-scaling out of the box.
Serverless authentication is another serverless service that has become increasingly popular. Services such as Auth0, Firebase, and AWS Cognito provide secure authentication and authorization for your application without you having to manage your own authentication server. They include features such as multi-factor authentication, passwordless authentication, social login, and user management.
On the other hand, serverless edge functions run on machines distributed worldwide, which makes them faster than regular AWS Lambda functions, which are served from a single region. These distributed edge functions are handy when you want your code to execute close to the user. Cloudflare Workers, AWS Lambda@Edge, and Deno Deploy are popular serverless edge function providers, and running them is very cheap as well. If you are interested, I highly recommend watching this video by Ryan Dahl (creator of NodeJS and Deno), where he discusses his idea of a dream stack built on serverless edge functions.
While edge functions can be useful in some cases, they may not be the best option if your application has a single database instance. The function would still have to make a round trip to that one database, which could be on the other side of the world from the edge location serving the request. This has been discussed in the video below by Theo on his channel; do check it out if you are interested.
However, there are ways to avoid a round trip to the database from a serverless edge function. Developers can use distributed serverless databases or distributed cache stores such as Cloudflare KV. These services can at least make read operations faster, since they keep a copy of the data in edge regions worldwide.
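The read-path optimization above can be sketched as a cache-aside lookup: check a replicated KV store first, and only pay for the long round trip to the origin database on a miss. The KV store and database here are simulated with in-memory Maps; in a real deployment the KV would be something like Cloudflare Workers KV, replicated across edge locations.

```javascript
// Cache-aside read at the edge (simulated): a nearby replicated KV store
// absorbs repeat reads, so only cache misses pay for the round trip to the
// single origin database. Maps stand in for the real stores here.
const edgeKV = new Map(); // stand-in for a replicated edge KV store
const originDB = new Map([["user:1", '{"name":"Aabis"}']]); // stand-in for the origin DB

async function readWithEdgeCache(key) {
  const cached = edgeKV.get(key); // fast: served from the nearest edge region
  if (cached !== undefined) return { value: cached, source: "edge" };

  const value = originDB.get(key); // slow: round trip to the single DB instance
  if (value !== undefined) edgeKV.set(key, value); // populate the edge copy
  return { value, source: "origin" };
}

readWithEdgeCache("user:1").then((first) =>
  readWithEdgeCache("user:1").then((second) =>
    console.log(first.source, "then", second.source) // origin then edge
  )
);
```

Writes still have to go to the origin (or use the KV's eventually consistent replication), which is why this mainly helps read-heavy workloads.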
Serverless Limitation: Cold Start Issue
Serverless architecture has revolutionized the way we build and deploy applications. However, as with any technology, there are drawbacks to consider. One of the major concerns with serverless is the cold start issue, which has been discussed many times on the internet.
Basically, when somebody invokes your serverless function, such as a Lambda, the platform first has to provision an execution environment, download your code, and initialize the runtime before executing it. These steps make up a cold start, and they can add anywhere from a few milliseconds to a few seconds of extra latency, depending on the code. If you are running a large Java application with heavy initialization, it can take even longer to start your function.
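One common mitigation is to keep expensive initialization at module scope, so it runs only once during the cold start and is reused by every subsequent warm invocation. The sketch below simulates this with a counter; `loadConfig` is a hypothetical stand-in for real initialization work such as opening database connections or constructing SDK clients.

```javascript
// Cold-start pattern: do expensive setup once at module scope (cold start),
// then reuse it across warm invocations instead of repeating it per request.
let initCount = 0;

function loadConfig() {
  // hypothetical expensive init (DB clients, SDKs, parsed config, ...)
  initCount += 1;
  return { region: "us-east-1" };
}

const config = loadConfig(); // runs once, during the cold start

function handler(event) {
  // runs on every invocation, warm or cold, reusing the module-scope config
  return { statusCode: 200, region: config.region, initializations: initCount };
}

console.log(handler({}).initializations, handler({}).initializations); // 1 1
```

The same execution environment is typically reused for many requests, so anything hoisted out of the handler is paid for only on the cold path.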
While serverless edge platforms like Deno Deploy and Cloudflare Workers claim little to no cold start time, there are other limitations to consider, such as runtime restrictions that make establishing raw TCP/TLS connections (for example, to a traditional database) difficult at the edge.
Despite these limitations, the benefits of serverless architecture far outweigh the drawbacks for many workloads. By carefully weighing the pros and cons, you can determine whether serverless is the right choice for your next full-stack application.
Conclusion
If you have read through this entire article, then I truly appreciate your time and attention. Over the years, there has been a lot of innovation in full-stack development and hosting, and I have learned a lot about web hosting and website management. My journey from shared hosting to serverless has been extremely fruitful.
In the end, the ideal deployment stack depends on your specific requirements. Sticking to shared hosting with cPanel is still a good idea for a simple PHP/WordPress project without high traffic. But when your application needs scalability and security, and when the developers need more control, it is worth thinking through the choice between serverless and serverful. Serverless is a great idea, and it is here to stay. If you can figure out ways to reduce cold starts for your APIs or functions, serverless will save you a significant amount of money in the long run. It also removes the extra headache of scaling, since serverless platforms tend to support scalability out of the box.
If you found this information useful, please consider sharing it with friends or colleagues who may also benefit from it. If you have any questions or would like to discuss the topic further, you can reach out to me on Twitter at twitter.com/aabiscodes or LinkedIn at linkedin.com/in/aabis7.