‘Serverless’ computing: a technology trigger
The public cloud is witnessing key innovations in native/managed services, and this appears to be just the beginning of a long line-up of services landing on the tarmac. The key approach taken by cloud vendors is to democratise software, and it aligns closely with the technology triggers we witness from time to time. ‘Serverless’ is one such stream, which promises to redefine the way systems are conceptualised and designed. Is serverless computing the future? Time will tell, but at this stage it is truly a technology trigger.
Is innovation at its peak? I would say a big ‘yes’. There is so much happening, and it is an exciting time to be in tech and witness all the technology triggers: driverless cars, artificial intelligence, digital assistants, new form factors. Time will tell how many of these will become mainstream in the near or far future. In the current realm of digital transformation, ‘customer experience’ is the next frontier, where the bulk of new experiences are being thought through. Technology innovations are powering these experiences, and the public cloud is truly operating as the foundation that makes them real.
The public cloud is becoming mainstream, and startups and enterprises alike are setting foot in this new paradigm. We have seen a decade or two of pure-play hardware investment driving solution hosting, followed by virtualisation playing a key role in optimising the hosting environment. At this juncture, the industry expects innovation beyond virtualisation, and this is where leading public cloud vendors are investing. ‘Managed services’ and ‘platform as a service’ (PaaS) already provide a clear path to taking solution hosting to the next level. However, the acme of innovation as we see it now is probably serverless computing.
We should look at ‘compute’ in the cloud through the lens of emerging architecture patterns. ‘SOA’ was the most used, and overused, architecture term of the last decade; it was invoked at every opportunity, irrespective of its fit or relevance to the solution at hand. As we take the journey forward, the pattern making news, and rightly so, is ‘microservices’. While there are many similarities in how SOA and microservices are perceived on the ground, microservices brings far greater clarity to how services are constructed. That is a topic for another day, but the foundation for future applications will definitely be independent, small, modular services that each run as a unique service and integrate through lightweight protocols. Simply put: microservices.
Serverless computing is an operating environment for code execution where infrastructure provisioning and the runtime are managed by the cloud provider. Scale and performance are baked into the model, so it is an environment where one can rest easy and focus purely on functionality. Serverless compute falls in the category of ‘functions as a service’ (FaaS), and this is where it differs from pure-play PaaS, where you still need to define a compute boundary.
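To make the idea concrete, here is a minimal sketch of what a serverless function looks like in practice, modelled on an AWS Lambda-style handler. The handler name, event shape, and local invocation at the end are illustrative assumptions; the point is that the author writes only the function, and the platform provisions the runtime and calls it.

```python
# Minimal sketch of a serverless function (AWS Lambda-style handler).
# The platform provisions the runtime and invokes the handler per request;
# no server or VM is managed by the author. The event shape below is a
# hypothetical example, not a fixed contract.
import json

def handler(event, context=None):
    """Entry point invoked by the serverless platform for each request."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation purely for illustration; in production the cloud
# runtime calls the handler in response to an event (HTTP request,
# queue message, file upload, and so on).
print(handler({"name": "serverless"}))
```

Everything outside the function body, including provisioning, patching, and scaling the machines that run it, is the provider's concern, which is exactly the compute boundary PaaS still asks you to define.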
Serverless is the most recent addition to the list of topics discussed in the architecture world. It gets significant attention at major conferences, and there is plenty of material being published on the subject. The concept is refreshing and innovative, and it highlights the real value of the public cloud's near-unlimited compute and elastic nature. While serverless as a concept is simple, below is a quick summary of its key benefits and the anticipated concern areas at this stage.
Independence from compute
In general, many teams reach for raw compute ahead of any native services a cloud vendor provides. Two factors contribute to this: the notion of staying in control of one's compute, just as in on-premise hosting, and the desire to avoid vendor lock-in through native services. Beyond a point, though, managing compute becomes an overhead, and more importantly, certain niche and innovative capabilities are offered only as managed services. As with any build-vs-buy decision, it is a waste of time, money, and energy to build capability that is already available as a service. Going forward, freedom from managing compute, coupled with the need for speed to market, will be key to innovating in the cloud, and this is where serverless will score.
Real ‘pay as you go’
We always talk about the concept of pay per use, and it gets real with serverless. Consider a scenario where you have spun up a few VMs hosting web workloads. In the current compute models, you pay for every minute and hour the VM remains in a ‘running’ state, irrespective of whether the website gets any hits. While there is some innovation around accruing idle time as CPU credits, that model is yet to become prevalent; if it did, the cost of leveraging compute in the public cloud would come down dramatically. Serverless moves squarely in this direction: one pays for resource consumption and execution, that is, for the execution time and the number of times the program or function runs, a key tenet of the serverless computing model.
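The economics can be sketched with a little arithmetic. The rates below are hypothetical placeholders, not any provider's actual pricing, but they follow the common serverless billing shape: a per-invocation fee plus a charge per GB-second of execution, against a VM billed for every hour it is running.

```python
# Hypothetical rates for illustration only -- real pricing varies by provider.
VM_RATE_PER_HOUR = 0.10            # always-on VM, billed while running
FN_RATE_PER_GB_SECOND = 0.0000167  # serverless: billed per GB-second of execution
FN_RATE_PER_REQUEST = 0.0000002    # plus a small per-invocation fee

def vm_monthly_cost(hours=730):
    """An always-on VM pays for every hour, hits or no hits."""
    return VM_RATE_PER_HOUR * hours

def serverless_monthly_cost(invocations, avg_seconds, memory_gb):
    """Serverless pays only for actual executions and their duration."""
    gb_seconds = invocations * avg_seconds * memory_gb
    return gb_seconds * FN_RATE_PER_GB_SECOND + invocations * FN_RATE_PER_REQUEST

# A lightly used site: 100,000 requests/month, 200 ms each, 128 MB of memory.
vm = vm_monthly_cost()
fn = serverless_monthly_cost(100_000, 0.2, 0.125)
print(f"VM: ${vm:.2f}/month, serverless: ${fn:.4f}/month")
```

For an idle or bursty workload the gap is dramatic; for a workload that runs hot around the clock, the comparison narrows and can even reverse, which is why the architectural fit matters.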
Scale beyond boundary
As we all know, elasticity is one of the main motivations for adopting the cloud. Yet despite having access to near-unlimited compute, we often operate within a boundary set by the anticipated load. Setting up infrastructure for scale is a big undertaking, and it is where administrators spend the bulk of their time. Many customers I have met have said outright that economics plays a critical role in addressing scale: no one provisions for scale without an upper limit, and that limit is set purely on the basis of planned capacity and anticipated load. Planning compute for constant throughput under increased load is an activity in itself, and the plethora of services in the cloud does give one the luxury of exploring various solutions. This is where serverless addresses the need to deliver consistent performance during peak load: keeping compute warm behind the scenes, scaling automatically, and providing a runtime to host the code is the foundation of serverless compute.
Architectural fixes
Choosing the serverless route demands a fresh look at the architecture's building blocks, and this is where cloud architects play a crucial role. It is also a question of price and performance, an area where decisions shape the overall outcome. Components that are disparate and can operate in isolation are perfect candidates for serverless; identifying them and stitching them back into the overall architecture requires code change. The extent of change depends on many factors, but the move can be a boon for performance and scale, which otherwise depend on the compute capacity of the chosen machine.
State management
The technologies backing serverless are evolving and continue to add support for more languages over time. The most visible gap today is the stateless nature of the operating environment. A piece of code may operate with some context or state in isolation, but when you move the same block to a serverless environment, that context is lost between invocations. I am sure technologists are working to set a strong foundation for serverless as we go along, and with it will come a clearer picture of how compute will be perceived and consumed in future.
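The statelessness problem can be sketched in a few lines. In the simulation below, a module-level counter stands in for in-process state and a plain dict stands in for a hypothetical external store (a managed database or cache); both names are illustrative. Because a serverless runtime may be recycled between invocations, only the externalised state survives.

```python
# Sketch: why in-process state is unreliable in a serverless environment.
# A new invocation may land on a fresh runtime instance, so module-level
# state resets; durable state must live in an external store. The dict
# below stands in for a hypothetical database or cache.

counter = 0  # in-process state: lost whenever the runtime is recycled

def handler_with_local_state(event):
    global counter
    counter += 1
    return counter

external_store = {}  # stands in for e.g. a managed key-value service

def handler_with_external_state(event):
    key = event["session"]
    external_store[key] = external_store.get(key, 0) + 1
    return external_store[key]

# Simulate the runtime being recycled between two invocations:
first = handler_with_local_state({})
counter = 0  # fresh instance: in-process state is gone
second = handler_with_local_state({})
print(first, second)  # both invocations see a count of 1

# The externalised state survives the same recycling:
handler_with_external_state({"session": "abc"})
total = handler_with_external_state({"session": "abc"})
print(total)  # the second call correctly sees 2
```

The practical consequence is that moving a component to serverless usually means pushing its session data, caches, and counters out to managed storage, which is part of the architectural rework discussed above.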
It is truly an interesting time, with many architecture patterns emerging to meet real demand on the ground. In the public cloud, striking a balance between cost and performance is always a challenge, and one likely to continue for a long time. The reality is that there is more than one solution to every problem in the public cloud, and each carries a price tag. Which option you choose is a decision you make keeping in mind many factors private to you and your organisation. In future, serverless might strike that balance, letting you think in terms of resource consumption and execution time ahead of self-provisioned compute. For now, it is truly a technology trigger and a promising stream to follow.
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)