Recently, I've been giving some thought to the roadblocks I discover when I attempt network automation and orchestration, and how those roadblocks affect the software-defined data center idea as a whole. As a result, I have a few suggestions for how to prepare a network – and the thought process – for the future.
Thou shalt document thy assets
Know what you’ve got, or go home.
It’s important to keep a complete and accurate record of all your network devices, ideally including not just hardware types and serial numbers, but other useful information, such as location, software version, licenses, and current configurations. It’s also important to have a full understanding of the network topology, so physical and logical mappings are helpful to have. The good news is that existing network management systems can pull most or all of this information together for you, and it can be extracted for use by other tools, if necessary. What you do not want is a list of hardware in a notepad and topology information stored only in Visio®; the data needs to be online and usable.
Without full visibility of your network, it’s going to be difficult to automate so much as a VLAN deployment.
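To make that concrete, here’s a rough sketch in Python of what “online and usable” might look like once the inventory is out of the notepad. The device names, models, versions, and fields are invented for illustration; in practice these records would be pulled from your network management system’s export or API rather than typed in by hand.

```python
# A rough sketch of a structured, queryable inventory. The devices,
# models, and version strings below are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Device:
    hostname: str
    model: str
    serial: str
    location: str
    os_version: str
    licenses: list = field(default_factory=list)

inventory = [
    Device("edge-sw-01", "EX4300", "PE1234567", "DC1 rack 12", "14.1X53-D45", ["enhanced-services"]),
    Device("core-rt-01", "ISR4451", "FTX0987A1B2", "DC1 rack 01", "16.9.4", ["securityk9"]),
]

# Once the data is structured, "which devices are still on old code?"
# is a one-liner instead of an afternoon of spreadsheet archaeology.
outdated = [d.hostname for d in inventory if d.os_version.startswith("14.")]
print(outdated)  # ['edge-sw-01']
```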
Thou shalt know thy applications
It’s not what, it’s WHO.
I believe we need to stop thinking about network flows in terms of IP addressing and start thinking about applications. Applications allow us to define network policies without getting bogged down in the details. This has a few major benefits:
- The rules are understandable, even when you look at them six months later.
- Adding or removing servers in an application role doesn’t change the policy; nor does changing a server IP.
- Decommissioning a service is easy, because policies for that service can be clearly located.
Need to add an IPS device? No problem. All you have to do is update the All IPS Devices group and the policy we have defined is unchanged; only the implementation detail has changed. Add in a QoS definition, and the same data can now feed into an SD-WAN solution for prioritization purposes as well as to configure the LAN switches and routers.
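As a purely illustrative sketch (the group names, members, and rule below are invented, not any product’s real syntax), here’s the idea in Python: the policy refers only to application roles, and a small rendering step expands it into the implementation detail.

```python
# Policy defined against application roles rather than IP addresses.
# Membership changes (a new IPS device, a re-addressed server) touch
# the groups, never the policy itself.
groups = {
    "Web Servers":     ["10.1.10.11", "10.1.10.12"],
    "App Servers":     ["10.1.20.21"],
    "All IPS Devices": ["10.1.99.5"],   # add a new IPS sensor here; the policy is untouched
}

policy = [
    {"name": "web-to-app", "src": "Web Servers", "dst": "App Servers",
     "port": 8443, "action": "permit", "qos": "business-critical"},
]

def render(policy, groups):
    """Expand the abstract policy into concrete rules a switch, firewall,
    or SD-WAN controller could consume. This is the implementation detail."""
    for rule in policy:
        for src in groups[rule["src"]]:
            for dst in groups[rule["dst"]]:
                yield f'{rule["action"]} {src} -> {dst} tcp/{rule["port"]} qos={rule["qos"]}'

for line in render(policy, groups):
    print(line)
```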
This approach is ripe for automation. It’s not too complex to achieve (famous last words), and I would suggest that’s exactly what should – and will – start to happen more widely.
Thou shalt befriend the server and storage teams
…despite what you may think of them.
I get it. The server team has been putting Network Admin on their business cards for years, even though they wouldn’t know an IP from a P.I., and people who understand SANs are, well, just weird. Nonetheless, DevOps has a significant head start on NetOps, and if we’re going to catch up we need to make friends with the people who have been playing in this particular sandbox for a number of years now.
Additionally, while it’s currently cool for the network team to experiment with automation, we’re still missing the point if we think that we can run an efficient and agile network while retaining the traditional storage, server, and network silos. The most functional orchestration requires a single conductor directing players working in harmony; three smaller orchestras trying to play at the same time just sounds like a Harrison Birtwistle recital.
Thou shalt not commit code
The best code is no code at all.
I realize that this goes against the grain for some, but I’ve maintained for a while that most network teams don’t want to code, and won’t need to code. If you’re a small to medium enterprise, you probably don’t have the dev resources to seriously code for your network. Instead, the open source and off-the-shelf solutions that do the work for you will grow in number and improve in features and reliability over time, bringing SDN benefits without needing to program (except perhaps to customize a little). In reality, all of us likely will experiment with automating existing tasks, but controlling the network is currently best left to companies that specialize in that task. I suspect it will be this way long into the future, as well.
All of the above applies doubly if you are considering writing code to manipulate OpenFlow® table entries.
Thou shalt abstract thy designs from thy implementation
Bananarama lied to us in song.
“It ain’t what you do, it’s the way that you do it,” is almost the exact opposite of the way we should be thinking about our networks. We tend to worry about the configuration commands needed to attain a desired end state, but instead, we should be able to define the desired end state and not worry about the implementation details. Abstracting configuration allows standard settings (SNMP, for example) to be defined in a generic fashion without worrying about the commands necessary to accomplish this on each end platform. Let a tool push it out to the entire network device population (a population we know because we followed the first commandment).
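Here’s a toy illustration of that idea, with the caveat that the command templates and platform keys are simplified assumptions rather than any real product’s rendering engine: one platform-neutral SNMP definition, rendered into per-platform commands by a tool.

```python
# Abstracting intent from implementation: the SNMP "intent" never
# changes; only the per-platform rendering does. Templates below are
# simplified for illustration.
snmp_intent = {"community": "netmon-ro", "trap_host": "10.0.0.50", "contact": "noc@example.com"}

TEMPLATES = {
    "ios": [
        "snmp-server community {community} RO",
        "snmp-server host {trap_host} version 2c {community}",
        "snmp-server contact {contact}",
    ],
    "junos": [
        "set snmp community {community} authorization read-only",
        "set snmp trap-group netmon targets {trap_host}",
        'set snmp contact "{contact}"',
    ],
}

def render_snmp(platform: str, intent: dict) -> list[str]:
    """Turn the platform-neutral intent into platform-specific config lines."""
    return [line.format(**intent) for line in TEMPLATES[platform]]

for platform in TEMPLATES:
    print(f"--- {platform} ---")
    print("\n".join(render_snmp(platform, snmp_intent)))
```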
Tail-f Systems, prior to its acquisition by Cisco®, had been working on a product to deploy a configuration written in a single style (Junos® or IOS®) to mixed endpoints from various vendors. More recently, OpenConfig has been seeking to create a data model that allows the creation of declarative configurations that can be implemented across various platforms. In the server world, you don’t see server admins worrying about the command to install Apache® on Ubuntu™ versus CentOS™, do you? No; they let Puppet Labs® do the hard work of figuring out how to implement the requirement. Conceptually, wouldn’t it be great if the entire network were built like this, independent from the underlying hardware? Can you imagine being able to swap out a Cisco router for a Juniper® router and have the management system deploy a functionally equivalent configuration to the new router automatically?
This isn’t going to happen next week or possibly even next year, but, again, it’s a good mindset to have as we move forward.
Final thoughts
The network is becoming a resource in the same way that storage, CPU, and RAM already are: just things we buy and deploy as needed. Providing on-demand instantiation of policies (and thus configuration) is important if we’re to integrate fully with the server and storage teams to achieve full end-to-end orchestration. Rather than pushing us toward programming, I believe that the complexity of these orchestration solutions will drive most companies toward purchased products or well-supported open source solutions that can do clever things on our behalf.
Finally, tomorrow's network will work best with a proper understanding of the applications involved, so feeding our policies and network inventory into that system will be essential.