NRE-Bytes

Morsels of NRE Wisdom

Building User Interfaces for Network Automation

Exploring ways to implement user interfaces for network automation frameworks

The importance of User Interfaces for Network Automation

One of the least talked about aspects of network automation workflows is the ability to interface with the different tools and frameworks that enable these workflows. Consider a scenario where you are part of a small team of Network Engineers at a company with a small but growing customer base. You realize the need to automate your device and link provisioning workflows and hack up a few Python scripts (or Ansible playbooks if you prefer) that get the job done, at least in the beginning.
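To make the idea concrete, here is a minimal sketch (not from the original post) of one common first step: wrapping an existing provisioning script behind a small HTTP API with Flask so teammates do not have to run it by hand. The provision_device() helper and its payload fields are hypothetical placeholders for whatever the scripts already do.

```python
# Minimal, illustrative sketch: a thin HTTP front-end over an existing
# provisioning workflow. provision_device() is a hypothetical stand-in
# for the team's real Python scripts / Ansible playbooks.
from flask import Flask, jsonify, request

app = Flask(__name__)


def provision_device(hostname, mgmt_ip):
    # Placeholder for the existing provisioning logic.
    return {"hostname": hostname, "mgmt_ip": mgmt_ip, "status": "provisioned"}


@app.route("/devices", methods=["POST"])
def create_device():
    # Hypothetical payload: {"hostname": "edge-01", "mgmt_ip": "192.0.2.10"}
    payload = request.get_json(force=True)
    result = provision_device(payload["hostname"], payload["mgmt_ip"])
    return jsonify(result), 201


if __name__ == "__main__":
    app.run(port=8080)
```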

So you want to GNMI?

Practical aspects to consider when using GNMI for Network Telemetry

GNMI for Network Telemetry

For those readers who build and manage Network Telemetry stacks, GNMI (gRPC Network Management Interface) is probably one of the hottest topics being discussed right now, and for good reason. The industry has relied far too long on vendor-specific mechanisms for obtaining telemetry data from network devices. It has been a vicious cycle: vendors heavily promote their own telemetry stacks, which deepens customers' dependency on those stacks, which in turn removes any incentive for vendors to work on something collaborative and standardized for the benefit of end customers.
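As a rough sketch of what a standardized, vendor-neutral telemetry subscription can look like, the snippet below streams interface counters with the open-source pygnmi library. The target address, credentials, path, and subscription options are illustrative assumptions; check pygnmi's documentation for the exact parameters supported by your version.

```python
# Rough sketch (not from the original post): stream interface counters
# over gNMI using pygnmi. Target, credentials and paths are placeholders.
from pygnmi.client import gNMIclient

subscription = {
    "subscription": [
        {
            # OpenConfig path for per-interface counters
            "path": "openconfig-interfaces:interfaces/interface/state/counters",
            "mode": "sample",
            "sample_interval": 10_000_000_000,  # nanoseconds, i.e. every 10s
        }
    ],
    "mode": "stream",
    "encoding": "json",
}

with gNMIclient(target=("192.0.2.10", 57400), username="admin",
                password="admin", insecure=True) as client:
    for message in client.subscribe2(subscribe=subscription):
        # Each message carries one batch of telemetry updates
        print(message)
```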

Turbocharge your Jinja2 Templates

Optimize your network configuration templating using Jinja2 tweaks

Jinja2 Templating for Network Configuration Management

A critical part of Network Automation is Network Configuration Management, which involves creating tooling or frameworks for maintaining, modifying, verifying and pushing configurations to network devices. To scale in a multi-vendor network environment, these configurations are often maintained as templates that are rendered with variables (usually YAML files or Python dictionaries) according to specific business logic. Jinja2 is a Python-based templating engine commonly used for this purpose.
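As a minimal illustration (not taken from the post itself), the sketch below renders a small interface configuration from a YAML variable block using Jinja2 and PyYAML; the template and variable names are made up for the example.

```python
# Illustrative only: render an interface config from YAML variables with
# Jinja2. Requires the jinja2 and PyYAML packages; the template and
# variable names are invented for this example.
import yaml
from jinja2 import Template

TEMPLATE = """\
interface {{ intf.name }}
 description {{ intf.description }}
 ip address {{ intf.ipv4 }}
 {% if intf.enabled %}no shutdown{% else %}shutdown{% endif %}
"""

VARIABLES = """
intf:
  name: GigabitEthernet0/1
  description: uplink-to-core
  ipv4: 192.0.2.1 255.255.255.252
  enabled: true
"""

config = Template(TEMPLATE).render(**yaml.safe_load(VARIABLES))
print(config)
```

In practice the template and variables would live in separate files and be loaded with a Jinja2 Environment and FileSystemLoader, but the rendering flow is the same.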

Building an Edge Traffic Controller - Part 2

Practical demo of using the controller to detour traffic from overloaded interfaces

In Part 1 of this series, we ran through the technical details of what it would take to build an edge traffic controller to steer traffic away from overloaded edge links. In this blog, I will try to demonstrate the controller in action by simulating traffic flows using our virtualized topology. We will also look into other real-world considerations such as operational monitoring and metrics.

Initial Setup

To recap, this is what our topology looks like:

Building an Edge Traffic Controller - Part 1

A Proof of Concept implementation of a Software Defined Edge traffic controller using sFlow and GoBGP

2017 was the year of the Software Defined Network (SDN). Apart from other things like new players jumping into the SDN space and a bunch of new SD-WAN offerings, two prominent innovation leaders - Google and Facebook - both released blogs and papers on their software defined edge networks. Google’s solution, named Espresso, is likely a more battle-tested and production-hardened solution owing to years of R&D and testing. It is, however (or at least in my opinion), a lot more complex than Facebook’s Edge Fabric, which uses a much simpler approach to solving the same problem: overcoming BGP’s inability to take link performance (which translates to application performance) into account in its routing decisions.