The purpose of this guide is to lead you through the four steps of creating your service interconnection fabric:
- deploying network and computational resources for your application in public clouds,
- configuring interconnection policy for cloud resources,
- configuring interconnection policy for application services,
- deploying application services on cloud resources.
As a result of following the prescribed procedures, your application services will be secured from each other and from the outside world, able to automatically discover each other, tolerant of cloud failures, and easily portable across private and public clouds.
You will achieve this by representing cloud resources and application services as a resource graph and a service graph, respectively, and by making your resource and service segmentation policy fully infrastructure-agnostic.
You will deploy your service interconnection fabric using the following Bayware components:
- Fabric Manager,
- Orchestrator Node,
- Processor Node,
- Workload Node.
You manage your application’s infrastructure via the fabric manager. Included with the fabric manager are two command-line-interface tools: BWCTL and BWCTL-API. The former lets you easily manage cloud resources; the latter lets you control resource and application policy. Instead of developing and deploying numerous cloud-specific policies, you create infrastructure-agnostic policy once and copy-paste it across private and public clouds, across multiple VPCs in the same public cloud, or across various public clouds.
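For example, creating a processor looks the same regardless of which cloud hosts the VPC. This is a hypothetical sketch: the `create processor` subcommand appears later in this guide, but the VPC names are placeholder assumptions:

```shell
# The same infrastructure-agnostic command, applied to VPCs in different clouds.
# VPC names are hypothetical placeholders.
$ bwctl create processor aws-us-east-vpc
$ bwctl create processor gcp-europe-west-vpc
```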
The orchestrator is a unified point for resource and application policy management. It might initially be deployed as a single node, offering policy controller functionality only, and later be enhanced with telemetry and events nodes. With all of these components in place, you get a single source for creating and managing all layers of security for your application in a multicloud environment, as well as in-depth metrics and detailed logs that show the status of your entire application infrastructure at a glance.
At a high level, the processor is an infrastructure-as-code component that secures data and control flows between application services in a multicloud environment. The processor plays multiple roles: ssh jump-host, microsegmentation firewall, and inter-VPC VPN gateway, among others. However, the most remarkable processor feature is the direct execution of your security policy without requiring any configuration. You can install the processor policy engine with all its dependencies on any VM or physical server and have it serve application traffic within a minute.
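As an illustration of the jump-host role, reaching a workload through the processor uses standard OpenSSH jump syntax (`-J`); the user and host names below are hypothetical:

```shell
# Reach a workload VM through the processor acting as an ssh jump host.
# 'processor.example.com' and 'workload-internal' are hypothetical names.
$ ssh -J ubuntu@processor.example.com ubuntu@workload-internal
```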
Each application workload node, whether a VM or a Kubernetes worker, runs a policy agent: a software driver that connects the workload to your service interconnection fabric. The agent manipulates eBPF programs, which process every packet coming to and from your application, all at the interface level. Additionally, the agent has an embedded cross-cloud service discovery mechanism that serves DNS and REST requests from the applications located on the node. Agent deployment and autoconfiguration take just another minute.
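For instance, an application on the node might resolve a peer service through the agent's local discovery mechanism. This is a hypothetical query: the service name, domain, and resolver address are assumptions, not taken from this guide:

```shell
# Ask the node-local resolver (served by the policy agent) for a peer service.
# Service name 'payments.app.fabric' and resolver 127.0.0.1 are hypothetical.
$ dig +short payments.app.fabric @127.0.0.1
```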
Behind the Scenes
The declarative language of the BWCTL and BWCTL-API command-line tools abstracts all the specifics of resource deployment and security policy management in hybrid cloud and multicloud environments. The tools allow you to interact with cloud provider APIs, manage virtual machines, and set up security policy. So, a lot happens in the background when you simply type, for example:
$ bwctl create processor <vpc-name>
$ bwctl-api create link -s <source-processor> -t <target-processor>
Here is a brief outline of what happens behind the scenes at each stage of service interconnection fabric deployment.
- Creating fabric
  - Setting up certificate authority
  - Setting up Terraform state
  - Setting up Ansible inventory
  - Setting up ssh transport in jump-host configuration
- Creating VPC
  - Creating virtual network
  - Creating subnets
  - Creating security groups
- Deploying orchestrator
  - Creating VM with appropriate firewall rules
  - Setting up policy controller containers
  - Deploying InfluxDB/Grafana for telemetry and ELK for events
  - Creating DNS records and certificates
- Deploying processor or workload
  - Creating VM with appropriate firewall rules
  - Deploying policy engine or agent
  - Deploying Telegraf for telemetry and Filebeat for events
  - Deploying certificate for mTLS channel with orchestrator
- Setting up and interconnecting security zones
  - Assigning processors and workloads to security zones
  - Connecting processors
  - Setting up IPsec encryption between processors and workloads
- Uploading communication rules and creating service graph
  - Installing templates
  - Setting up domain for application policy
  - Specifying contracts by altering templates
  - Assigning application services to contracts
- Deploying application
  - Authorizing service endpoints with tokens
  - Authorizing application packets at source and destination workloads and all transit processors
  - Discovering service endpoints and auto-configuring local DNS
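Put together, the stages above map onto a command sequence like the following sketch. Only `create processor` and `create link` appear in the example earlier in this guide; the remaining subcommands and all resource names are assumptions for illustration:

```shell
# Hypothetical end-to-end sequence; names and most subcommands are assumptions.
$ bwctl create fabric demo-fabric          # CA, Terraform state, Ansible inventory
$ bwctl create vpc aws-vpc-1               # virtual network, subnets, security groups
$ bwctl create orchestrator aws-vpc-1      # policy controller, telemetry, events
$ bwctl create processor aws-vpc-1         # policy engine node
$ bwctl create workload aws-vpc-1          # policy agent node
$ bwctl-api create link -s <source-processor> -t <target-processor>
```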
In the next four steps, you will create an environment for multicloud application deployment in which the infrastructure is just part of your application code and blends into the application CI/CD process. You don’t need to configure networks and clouds in advance in order to deploy application services, and when you move services, the policy follows them. Also, a single source for your multilayered security policy ensures there are no gaps or inconsistencies in the application’s defense.