As containers, microservices, and FaaS become more deeply integrated into hybrid and cloud environments, you must stay vigilant to navigate these aspects of the modern app development landscape successfully.
We all know this can be daunting; it's an ephemeral world out there. But you can optimize the operation of containerized workloads by leveraging a service mesh, and make smarter use of FaaS by understanding its specific use cases and best practices.
Meshing Around: Enhancing Container and Microservice Management through Service Mesh
As more DevOps pros run containers in production and operate containerized workloads in increasingly complex ways, container orchestrators have become commonplace. Tech pros have even begun moving beyond container orchestrators into a new layer of tooling called a service mesh. Instead of attempting to overcome distributed systems concerns by writing infrastructure logic into application code, tech pros can manage these challenges with a service mesh: this layer of tooling centralizes responsibility for service management, avoids redundant instrumentation, and makes observability ubiquitous and uniform across services.
A service mesh can also be used with microservices to provide better management and control, as well as insight into how those services communicate with one another. With microservices, the sheer volume of services that must be managed individually and in a distributed fashion (versus centrally for a monolith) makes reliability, observability, and security harder to ensure, and implementing a service mesh can help address these challenges. Adding this new layer creates the potential to build robust, scalable applications with granular control over individual microservices.
At first, the idea of running a function to perform a task and only paying for the execution time needed to run that task sounds appealing. Unfortunately, this pricing model becomes expensive if you execute many functions, or run a specific function millions of times. With that in mind, you should consider serverless when a workload is:
- easy to parallelize into independent units of work
- infrequent, or subject to sporadic demand
- highly variable and unpredictable in its scaling requirements
- tolerant of cold-start latency, with no hard need for instantaneous response, or
- highly dynamic, with changing business requirements that demand accelerated developer velocity
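As a rough, back-of-the-envelope sketch of the pricing trade-off described above, the snippet below compares an estimated pay-per-execution bill against a flat always-on server cost. All rates are illustrative placeholders, not real provider prices, and the billing model (GB-seconds plus a per-request fee) is a simplification.

```python
# Hypothetical rates -- placeholders for illustration only.
GB_SECOND_RATE = 0.0000167    # price per GB-second of function compute
PER_REQUEST_RATE = 0.0000002  # price per invocation
VM_MONTHLY_RATE = 35.0        # flat monthly cost of an always-on server

def faas_monthly_cost(invocations, duration_s, memory_gb):
    """Estimate monthly FaaS cost: compute time plus per-request fees."""
    compute = invocations * duration_s * memory_gb * GB_SECOND_RATE
    requests = invocations * PER_REQUEST_RATE
    return compute + requests

# Sporadic workload: 100k short invocations/month costs pennies.
print(faas_monthly_cost(100_000, 0.2, 0.5))
# Heavy workload: 100M invocations/month can far exceed the always-on price.
print(faas_monthly_cost(100_000_000, 0.2, 0.5))
```

Under these assumed rates, the sporadic workload is dramatically cheaper than a dedicated server, while the high-volume workload costs several times more, which is exactly why invocation volume should drive the serverless decision.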
Some examples of serverless-friendly workloads include:
- Executing logic in response to database changes (e.g., insert, update, and delete events fired by triggers)
- Performing analytics on IoT sensor input messages, such as Message Queuing Telemetry Transport (MQTT) messages
- Handling stream processing (analyzing or modifying data in motion)
- Managing one-time extract, transform, and load (ETL) jobs that require a great deal of processing for a short time
- Providing cognitive computing via a chat bot interface (asynchronous, but correlated)
- Scheduling tasks performed for a short time (e.g., cron- or batch-style invocations)
- Serving machine learning and AI models (retrieving one or more data elements such as tables or images and matching against a pre-learned data model to identify text, faces, anomalies, etc.)
- Running continuous integration pipelines that provision resources for build jobs on demand, instead of keeping a pool of build hosts idle while waiting for jobs to be dispatched
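As a minimal sketch of one of the use cases above, here is a FaaS-style handler performing stream-style analytics on IoT sensor readings. The event shape and the `handler(event, context)` signature are hypothetical; real platforms each define their own conventions.

```python
def handler(event, context=None):
    """Flag sensor readings that exceed a threshold, in one stateless pass."""
    threshold = event.get("threshold", 75.0)
    readings = event["readings"]  # list of {"sensor": ..., "value": ...} dicts
    alerts = [r for r in readings if r["value"] > threshold]
    return {"processed": len(readings), "alerts": alerts}

# Hypothetical invocation payload, as a platform might deliver it.
sample_event = {
    "threshold": 75.0,
    "readings": [
        {"sensor": "temp-1", "value": 71.2},
        {"sensor": "temp-2", "value": 82.4},
    ],
}
print(handler(sample_event))
```

Note that the handler holds no state between invocations: each call receives everything it needs in the event, which is what makes workloads like this easy to parallelize and a natural fit for pay-per-execution pricing.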
Universal Tips for Navigating an Ephemeral World
Each architecture in the modern app development landscape has its own quirks, so knowing how and when to leverage each of the three is key. As a tech pro, you can apply four universal best practices to deploying, running, and implementing containers, microservices, and FaaS:
- Prioritize Observability – When writing an application for containers, microservices, or FaaS, make it observable by exposing key metrics about its performance.
- Adopt Monitoring Tools – Containers, microservices, and FaaS follow different application development patterns than tech pros traditionally encounter, so the right tooling isn't always available. Adopting monitoring and debugging tools that support these patterns is crucial to deploying and running workloads successfully.
- Design for the Lifecycle – A function will come and go, a container will come and go, and applications must be designed to support this ephemeral lifecycle. With functions specifically, incorrect logic can cause functions to fall into a vicious cycle of calling each other, spiking bills and accomplishing little.
- Know Your Use Case – Understand the needs of your hybrid or cloud environment before making the move to containers, microservices, or FaaS.
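To illustrate the observability practice above, here is a hedged, stdlib-only sketch of exposing key metrics from inside an application. In a real deployment you would typically use an established metrics client library rather than a hand-rolled registry like this; the names (`METRICS`, `observed`, `handle_request`) are all illustrative.

```python
import time

# A toy metrics registry standing in for a real metrics client.
METRICS = {"requests_total": 0, "request_seconds_sum": 0.0}

def observed(fn):
    """Decorator that counts calls and accumulates wall-clock latency."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            METRICS["requests_total"] += 1
            METRICS["request_seconds_sum"] += time.perf_counter() - start
    return wrapper

@observed
def handle_request(payload):
    # Placeholder business logic; the decorator records its metrics.
    return {"echo": payload}

handle_request("ping")
handle_request("pong")
print(METRICS["requests_total"])  # two requests recorded
```

The point is that instrumentation lives alongside the code from day one, so whichever monitoring tool you adopt has meaningful signals to scrape, whether the workload runs in a container, as a microservice, or as a function.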
Keep these best practices in mind and remain diligent, and you'll be equipped with the right tools to survive and thrive in the modern app development landscape.