When it comes to automation, however, the questions are where and how to take those first steps toward what some are calling the next phase of the digital revolution. Automation invariably requires a certain amount of trust: trust that the automation system will perform as expected, and trust that meeting those expectations will actually benefit the business. The last thing anyone wants is to see services grind to a halt or systems shut down because of conflicts between automated processes.
So before the enterprise embarks on this new round of automation, it might be wise to consider the many ways in which it can be done with minimal risk of disruption.
One way to do that is to reconsider the ways in which current infrastructure management inhibits automated processes, or at least keeps them from performing at an optimum level. One such inhibitor, says AppViewX’s Mark Vondemkamp, is the complex coding that currently governs tasks like provisioning and management. Networking in particular requires dedicated teams of professionals to navigate the relationships between applications and infrastructure, and while automation can certainly help in this regard, it would be easier for everyone if the enterprise converted to low-code processes for workflows and security. With less code to contend with, automated processes can run faster and at greater scale.
Networking can also be difficult to automate if it is not optimized for extreme scale. According to Researchmoz, many of today’s hyperscale data centers were built from the ground up around non-blocking Clos network architectures, which reduce the number of crosspoints along a given pathway to provide a more efficient and scalable ecosystem. This not only reduces the potential for data interruptions, it also provides a more conducive environment for the automated commissioning and decommissioning of virtual servers and other resources on the application layer.
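The crosspoint savings behind a Clos design can be shown with some quick arithmetic. The sketch below is a hypothetical illustration (the sizes and function names are not from the article): it compares a monolithic 256-port crossbar against a strictly non-blocking three-stage Clos fabric with the same port count, using the classic condition that the middle stage needs m ≥ 2n − 1 switches.

```python
# Crosspoint count for a 3-stage Clos fabric vs. a single crossbar.
# All names and example sizes here are illustrative, not from the article.

def crossbar_crosspoints(ports: int) -> int:
    """A monolithic crossbar needs one crosspoint per input/output pair."""
    return ports * ports

def clos_crosspoints(n: int, m: int, r: int) -> int:
    """3-stage Clos C(m, n, r): r ingress (n x m) switches,
    m middle (r x r) switches, and r egress (m x n) switches."""
    return 2 * r * n * m + m * r * r

# Strict-sense non-blocking requires m >= 2n - 1.
n, r = 16, 16            # 16 ingress switches, 16 ports each -> 256 total ports
m = 2 * n - 1            # 31 middle-stage switches
ports = n * r

print(crossbar_crosspoints(ports))  # 65536
print(clos_crosspoints(n, m, r))    # 23808 -- far fewer for the same port count
```

The gap widens as the port count grows, which is why the fabric scales where a flat crossbar cannot.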
Even within an existing automated framework, onboarding complex processes can be a challenge. Red Hat discovered this with its Ansible automation framework when customers attempted to apply its playbook style of workflow management to high-scale, multi-faceted functions in hybrid cloud settings. This led the company to devise a scheme for “nesting” playbooks within one another, allowing users to split large jobs into groups of smaller jobs, each of which can then be automated and orchestrated to a high degree. At the same time, it allows for repetitive functions to be modularized for reuse across multiple processes, reducing the overall data footprint and leading to more streamlined operations.
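In Ansible, this kind of nesting is done with the `import_playbook` directive, which pulls one playbook into another, while repetitive steps are typically packaged as roles for reuse. The file and role names below are hypothetical, shown only to sketch the pattern:

```yaml
# site.yml -- a top-level playbook that nests smaller playbooks.
# File and role names here are illustrative examples.
- import_playbook: provision_network.yml
- import_playbook: deploy_app.yml

# deploy_app.yml -- one of the smaller jobs, reusing a modular role.
- hosts: app_servers
  roles:
    - common_hardening   # repetitive function packaged once, reused across jobs
```

Each imported playbook can then be run, tested, and orchestrated on its own, which is what makes the large job manageable.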
Any way you look at it, automation will be a complex endeavor. Many legacy systems and applications were designed for physical-layer automation, and while much of this infrastructure has been virtualized, it still presents challenges when attempting the kind of automation intended to support the emerging services economy. Many organizations are doing an end-run around these limitations by deploying next-gen apps and services on greenfield infrastructure that is purpose-built for the latest automation stacks.
Ideally, of course, the entire IT environment should be automated, but there is always the danger of trying to do too much too quickly. By identifying what they want to automate first, most enterprises should be able to create an implementation plan that is both effective and non-disruptive – even when infrastructure itself must be reconfigured for the new order.
Arthur Cole writes about infrastructure for IT Business Edge. Cole has been covering the high-tech media and computing industries for more than 20 years, having served as editor of TV Technology, Video Technology News, Internet News and Multimedia Weekly. His contributions have appeared in Communications Today and Enterprise Networking Planet and as web content for numerous high-tech clients like TwinStrata and Carpathia. Follow Art on Twitter @acole602.