Over the last 40 years, networking innovation has focused largely on data-plane performance and the introduction of new protocols. Adding new network services often requires installing middle-boxes. SDN and new automation tools were intended to simplify network service creation and improve network responsiveness to application requests, but at the expense of additional layers of complexity and divergent solutions.
Meanwhile, compute instances have become instantly available at ever-finer levels of granularity (think VMs, containers, microservices) whenever an application requests them. The communication domain, however, is still unable to provide adequate networking services with this same agility, velocity, and security.
This unaligned progress between computation and communication substantially hinders the transition to hybrid clouds, microservices, IoT, and other cutting-edge technologies.
Within the networking domain, the introduction of virtualized network functions (VNFs) brought the legacy configuration-centric model to the world of virtual machines and containers. However, this approach has failed to deliver the agility, flexibility, and security required for modern application services distributed across private, hybrid, and multi-cloud environments.
Within the compute domain, sidecar data proxies have been developed that integrate VNFs automatically into each workload, offering control over connectivity without centralized network configuration. This approach entirely disconnects network services from the underlying network infrastructure, making it largely inefficient and relatively insecure.
Container sidecar VNFs solve one problem but create another.
Fencing off VNFs into two domains burdens the compute domain and blinds the networking domain. The compute domain now suffers from lower performance with no top-down policy enforcement. Likewise, inefficient operation within the networking domain makes segmentation and service chaining difficult and lacks bottom-up traceability.
VNFs & ENFs
In the network domain, VNFs providing new services must be introduced through a complex and time-consuming process of developing new packet header fields, new header parsing rules, and new match-action tables. All these changes must be propagated to every network device and management system, touching everything from low-level silicon to high-level software protocols. Somewhere along the path, packets originated by workloads must be intercepted and classified, and new headers must be created, attached, or removed.
Bayware’s Ephemeral Network Functions, operating in the compute domain, address this problem. Network functions, written in a Java-like syntax, are compiled into RISC-V microcode. That microcode is made available to the workload through the Microservice Controller. The workload–be it a container, VM, or bare metal–downloads the microcode and embeds it into its data flows. RISC-V-based network processors–the Network Microservice Engines deployed throughout the network–realize the requested service by executing each packet’s embedded microcode and instructing the forwarding plane.
Ephemeral Network Functions, originated by workloads and executed by NMEs, provide the compute domain with pervasive security and better performance and give the networking domain more visibility, higher efficiency, and agile segmentation and service chaining.
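The download-embed-execute cycle described above can be sketched in a few lines. This is purely illustrative: the store, function names, and the length-delimited wire format are assumptions for the sketch, not Bayware's actual API or packet layout.

```python
# Hypothetical sketch of the ENF life cycle: a workload downloads
# compiled microcode from the controller, embeds it in each packet,
# and an NME extracts the microcode from the packet for execution.

MICROCODE_STORE = {  # stands in for the Network Microservice Controller
    "multicast-v1": b"\x93\x00\x00\x00",  # placeholder microcode bytes
}

def download_microcode(service: str) -> bytes:
    """Workload-side: fetch compiled microcode for a named service."""
    return MICROCODE_STORE[service]

def embed(microcode: bytes, payload: bytes) -> bytes:
    """Prefix the packet with a length-delimited microcode header."""
    return len(microcode).to_bytes(2, "big") + microcode + payload

def nme_process(packet: bytes) -> tuple[bytes, bytes]:
    """NME-side: split embedded microcode from payload for execution."""
    n = int.from_bytes(packet[:2], "big")
    return packet[2:2 + n], packet[2 + n:]

code = download_microcode("multicast-v1")
pkt = embed(code, b"application data")
microcode, payload = nme_process(pkt)
assert microcode == code and payload == b"application data"
```

The key property the sketch captures is that the service travels with the packet: the NME needs no per-service configuration, only the ability to extract and run what arrives.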
No Device Configuration
Rather than trying to keep up with the agility and velocity of compute instance creation through centralized configuration, the Bayware platform delivers communication services through decentralized application-side programming.
Bayware’s secure, credentialed Network Microservice Controller replaces traditional network controllers, providing a radically simplified mechanism for agile connectivity that can easily keep up with the endpoint churn inherent in VMs and containers.
The Network Microservice Engine (NME) allows an application to instantly deploy any communication service–connectivity and policy–eliminating middle-boxes such as firewalls, load balancers, and CDNs.
Bayware’s patented technology makes communication service instantiation unprecedentedly programmable, agile, and secure.
The traditional process is circuitous: introduce new fields in packet headers, new rules for header parsing, and new actions for tables, then propagate all these changes via one–or more–proprietary network element management systems to every network device in the infrastructure. The Bayware solution instead allows one to program the requested services, compile them into general-purpose microcode ready for insertion into packet headers, and make this code available to business applications via the Network Microservice Controller.
Afterward, business applications can download and embed microprograms into their data flows. Bayware-enabled packets are then tunneled to a Bayware overlay network–with zero impact on existing infrastructure and services.
Microprograms embedded in an application’s own data flow tell the Bayware NMEs how to deliver a communication service. This eliminates the need to run complex routing protocols everywhere in the network or to install specialized middle-boxes every time a new service is requested.
Bayware’s centralized Network Microservice Controller means that services like micro-segmentation, multicast, load balancing, and content distribution are available to VMs and containers for immediate download just like mobile apps for smartphones. And simple software programming gives end users the power to create microprograms so that their applications carry custom communication services.
This Controller-centric approach is so efficient that enterprises and service providers can create and deploy new network services in roughly 10% of the time and effort required with current technologies.
When an application has control over its own communication services, those services become instantly available to each workload at the moment they are needed. Now, when VMs and containers are created and deleted to follow customer demand, the associated communication services are created and deleted with them.
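The lifecycle coupling described above can be sketched as a simple registry in which a workload's communication services are created and torn down with the workload itself. The class and service names are hypothetical, chosen only for illustration.

```python
# Illustrative sketch: communication services share the workload's
# life cycle, so deleting the workload removes its services with it.

class ServiceRegistry:
    def __init__(self):
        self.active = {}  # workload name -> set of communication services

    def create_workload(self, name: str, services: set) -> None:
        """Creating a workload instantiates its communication services."""
        self.active[name] = set(services)

    def delete_workload(self, name: str) -> None:
        """Deleting a workload tears its services down automatically."""
        self.active.pop(name, None)

reg = ServiceRegistry()
reg.create_workload("orders-api", {"micro-segmentation", "load-balancing"})
assert reg.active["orders-api"] == {"micro-segmentation", "load-balancing"}
reg.delete_workload("orders-api")
assert "orders-api" not in reg.active
```

The point of the sketch is the absence of any separate configuration step: service state never outlives the workload it belongs to.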
Slow, complex, centralized configuration–whether through SDN or other automation services–is no longer required.
Moving security policy controls into the data flows themselves strengthens security: it prevents potentially harmful code carried in packets from running and blocks access to network resources that have not been authorized.
Bayware’s solution includes a security framework that cryptographically signs the code in the Network Microservice Controller–much like Apple’s App Store signs its apps today–with subsequent verification of these signatures by each Bayware network node before the code is fed to the processor.
Signature verification is applied only to the first packet in a flow to establish flow identity; subsequent packets in the same flow are subjected to a lighter-weight cryptographic check. Additionally, every host (server, VM, container) must be authenticated by the Network Microservice Controller before it can send or receive packets, or even request microcode.
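The two-tier verification pattern described above can be sketched with the Python standard library. This is a sketch under stated assumptions: a real deployment would use asymmetric signatures for publishing microcode, but the stdlib has no public-key signing, so HMAC stands in for the controller's signature; key names and message formats are illustrative.

```python
# Sketch of two-tier verification: a full signature check on the
# first packet of a flow, then a truncated per-packet tag afterward.
import hashlib
import hmac
import os

CONTROLLER_KEY = os.urandom(32)  # stand-in for the controller's signing key
FLOW_KEY = os.urandom(16)        # per-flow key established at flow setup

def sign_microcode(code: bytes) -> bytes:
    """Controller-side: sign microcode before publishing it."""
    return hmac.new(CONTROLLER_KEY, code, hashlib.sha256).digest()

def verify_first_packet(code: bytes, sig: bytes) -> bool:
    """Node-side: full signature check on the flow's first packet."""
    return hmac.compare_digest(sign_microcode(code), sig)

def tag_packet(seq: int, payload: bytes) -> bytes:
    """Lighter-weight tag for subsequent packets in the same flow."""
    msg = seq.to_bytes(4, "big") + payload
    return hmac.new(FLOW_KEY, msg, hashlib.sha256).digest()[:8]

def verify_packet(seq: int, payload: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(tag_packet(seq, payload), tag)

code = b"\x93\x00\x00\x00"
assert verify_first_packet(code, sign_microcode(code))     # full check once
assert verify_packet(1, b"data", tag_packet(1, b"data"))   # cheap thereafter
assert not verify_packet(2, b"data", tag_packet(1, b"data"))
```

The design choice the sketch illustrates is amortization: the expensive check establishes flow identity once, after which each packet only needs a short keyed tag bound to that flow.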
Security at this granularity ensures that no business application transmits or receives data that the enterprise security policy does not permit.
The following four components compose Bayware’s platform:
- Network Microservice Controller – one or more servers that ensure trust and store contract templates.
- Network Microservice Engine – a network entity that processes Bayware flows.
- Network Microservice Agent – a software driver running on each workload that connects the workload to a Bayware network.
- Network Microservice SDK – a toolkit for creating microprograms that run on Bayware engines.
Use case: Micro-segmentation
One obvious example of where Bayware’s solution brings immediate relief is micro-segmentation. Micro-segmentation subdivides everything business apps do into specific flows between application workloads. In Bayware’s architecture, customers receive policy enforcement and encryption at the flow level, allowing them to phase out traditional routers, switches, and load balancers along with their associated controllers and policy engines.
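Flow-level micro-segmentation can be sketched as a default-deny policy keyed by individual workload-to-workload flows rather than by network segments. Workload names, the policy format, and the triple used as a key are all illustrative assumptions.

```python
# Minimal sketch of flow-level micro-segmentation: each permitted
# flow between workloads is an explicit entry; anything not listed
# is denied, with no VLANs or segment-wide rules involved.

ALLOWED_FLOWS = {
    ("web-frontend", "orders-api", 8443),
    ("orders-api", "orders-db", 5432),
}

def flow_permitted(src: str, dst: str, port: int) -> bool:
    """Default-deny check applied per flow, not per network segment."""
    return (src, dst, port) in ALLOWED_FLOWS

assert flow_permitted("web-frontend", "orders-api", 8443)
assert not flow_permitted("web-frontend", "orders-db", 5432)
```

Note that the policy is directional: permitting the API to reach the database says nothing about the reverse direction, which stays denied unless listed.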
Use case: Hybrid clouds
Sensitive data requires consistent security policy enforcement while traveling through multiple clouds. Bayware-enabled data flows are subject to unified ID management, authentication, and authorization at initiation, and they carry this information with them as they transit networks. As a result, there is no need to preconfigure distant routers, switches, and firewalls in other clouds, or to worry about the security measures that may–or may not–be available in public clouds. Bayware flows are completely isolated from these concerns, and corporate data security policies remain in lock step regardless of the number of public or private clouds and data centers in a flow’s path.
Customers replace divergent solutions with a more unified, less expensive, and far more cloud- and container-adaptive approach from Bayware. With Bayware, customers’ networking policies become part of a micro-segmented, application-flow architecture that spans private and public clouds as well as physical, VM, and container endpoints.
There are myriad compelling advantages in providing IT managers with this new type of agile connectivity tool, including:
- Faster service delivery to their clients via instant provisioning of connectivity in complex, multi-cloud environments;
- Slashing OPEX and expensive headcount by eliminating all intermediate–and now unnecessary–proprietary layers of network management/automation systems;
- Significantly reducing both CAPEX and OPEX by shifting special packet treatment from purpose-built and pre-programmed network appliances (Firewall, Load balancer, ADCs, etc.) to simple microprograms injected into the packets themselves;
- Eliminating the necessity to configure VLANs, program firewalls and maintain complex ACLs by authorizing and authenticating flows that carry their own path information through the network.