SIF Deployment

A full Bayware system utilizes an orchestrator, processors, and agents. The processors in your system work together to form a service interconnection fabric between your application microservices. For this tutorial, you will go through the steps to turn four of your VMs into processors: aws-p1 and aws-p2 in AWS; azr-p1 in Azure; and gcp-p1 in GCP. To be clear, and recalling the importance of policy enforcement in your deployments: installing a service interconnection fabric creates a zero-trust network in which no communication is allowed between your microservices until contracts are explicitly put in place.

Before we begin, let’s use the orchestrator to show that no processors (and no agents) currently exist in your network. To do this, go back to the browser window in which you logged into the orchestrator. In the navigation bar on the left, find the Topology button under the Admin heading as shown in Fig. 21.

Empty Topology in Bayware Orchestrator

Fig. 21 Empty Orchestrator Topology

Click Topology. The pane on the right should be empty. Later, you will see that processors appear as large circles in this pane.

The subsequent commands can be broken up into the following steps:

  1. Log in to a processor VM’s OS
  2. Install Bayware’s engine and Open vSwitch
  3. Configure the engine
  4. Repeat steps 1 - 3 for each processor VM
  5. Create links between processors

Step 1: SSH to VM

Let’s begin with aws-p1. From the command-line prompt on your CCC, log in

]$ ssh centos@aws-p1

You will not need a password since a public key has already been installed on aws-p1.
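If the login unexpectedly prompts for a password, one quick way to check key-based authentication is to run a single command over SSH from the CCC. This is an optional check, not part of the tutorial itself; with BatchMode enabled, ssh fails immediately instead of falling back to a password prompt, and on success it simply prints the VM’s hostname.

]$ ssh -o BatchMode=yes centos@aws-p1 hostname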

You should now have a prompt on aws-p1 that looks similar to

[centos@aws-p1-382fd7 ~]$

The following commands require super-user privileges, so become root

[centos@aws-p1-382fd7 ~]$ sudo su -

which should give you a prompt like

[root@aws-p1-382fd7 ~]#

The root prompt will be abbreviated in the description below to ]#.

Step 2: Install Bayware’s engine and Open vSwitch

Each processor node in a service interconnection fabric consists of two pieces: an engine that determines intent and a data path that moves traffic. The engine, part of Bayware’s core technology, is an Erlang- and C-based application available from the bayware-repo. The data path for this particular implementation utilizes the open-source application Open vSwitch. The bayware-repo and the epel-release repo, both required for these applications, have been preinstalled on your virtual machines.
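If you would like to confirm that both repositories are in place before installing, yum can list the enabled repositories. This is an optional check; the exact repository IDs on your VMs may differ slightly from the names used above, so adjust the grep pattern if nothing matches.

]# yum repolist enabled | grep -Ei 'bayware|epel'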

Now install the engine and Open vSwitch on aws-p1

]# yum install ib_engine openvswitch -y
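As an optional sanity check, you can ask rpm to confirm that both packages landed before moving on; it prints the installed name and version of each package, or an error if one is missing.

]# rpm -q ib_engine openvswitch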

Step 3: Configure the engine

Now you will configure the engine so that it becomes visible to the orchestrator. For security purposes, each engine in the network may have its own login credentials, and that’s how this tutorial has been configured. In your SIS page, locate the table with Bayware Processor login credentials (overall, table rows 19 - 22). Note the domain, username, and password associated with the first processor (row 19) as you’ll need them in the next step.

To configure the engine, cd to the directory that contains the configuration script, /opt/ib_engine/bin.

]# cd /opt/ib_engine/bin

The script is called ib_configure. You can display the usage instructions by typing

]# ./ib_configure -h

You will run the script now in interactive mode. The script will prompt you to enter the following information

  • orchestrator IP or FQDN: use c1382fd7.sb.bayware.io as shown in the URL table at the top of the SIS. Your FQDN prefix will be different from the c1382fd7 shown here. Do not include the https:// that is present in the URL (see the short example after this list).
  • node domain: use the domain from login credentials, row 19
  • node username: use the username for this engine from login credentials, row 19
  • node password: use the password for this engine from login credentials, row 19
  • configure IPsec: answer YES
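As a quick illustration of the first prompt, the value to enter is simply the URL from the SIS table with the scheme stripped off. The c1382fd7 prefix below is just the example used above; yours will differ.

]# echo 'https://c1382fd7.sb.bayware.io' | sed 's|^https://||'
c1382fd7.sb.bayware.io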

Begin the interactive script now by entering the following at your prompt

]# ./ib_configure -i

After you work your way through the script and it does its magic, the engine will be configured, but it won’t be running. Since the engine is controlled by Linux systemd, you should start and enable it with the following

]# systemctl start ib_engine
]# systemctl enable ib_engine
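Before heading back to the orchestrator, you can optionally confirm with standard systemd commands that the engine started and will come up on boot; if anything looks off, journalctl shows the engine’s recent log output.

]# systemctl is-active ib_engine
]# systemctl is-enabled ib_engine
]# journalctl -u ib_engine -n 20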

The aws-p1 node should now be registered as a processor on the orchestrator. To see this, once again go to the orchestrator tab open in your browser and click Topology.


Fig. 22 Bayware Processor Installed on aws-p1

You should see a green circle with the node name of this processor next to it as shown in Fig. 22. You can also see this registered resource by clicking on the orchestrator’s Resources button located near the top of the left-side navigation bar.

Step 4: Repeat steps 1 - 3 for each processor VM

Now that you have successfully installed one processor, repeat steps 1 through 3 above with the three remaining Proc VMs and processor login credentials listed in your SIS. When you’re finished, the four Bayware processors should be running on VMs as shown in the table below.

Table 2 Mapping Bayware Processors to Virtual Machines
VM          Processor Login Username
aws-p1      proc-1
aws-p2      proc-2
azr-p1      proc-3
gcp-p1      proc-4

To do this, log out of aws-p1 in the terminal window you used above: exit from root and then exit the VM altogether by typing

]# exit
]$ exit

which should get you back to your CCC VM and its prompt

[centos@aws-bastion-382fd7 ~]$

For the quick studies among you, the essential commands have been reproduced in the CHEAT SHEET - PROC INSTALL with a few hints about what goes where. If you’re comfortable with the whys of all this typing, the cheat sheet simply saves a little scrolling. Otherwise, feel free to go back through each of the steps in detail.

CHEAT SHEET - PROC INSTALL

]$ ssh centos@aws-p2                     hint: [ aws-p2, azr-p1, gcp-p1 ]
]$ sudo su -
]# yum install ib_engine openvswitch -y
]# cd /opt/ib_engine/bin
]# ./ib_configure -i                     hint: [ proc-2, proc-3, proc-4 ]
]# systemctl start ib_engine
]# systemctl enable ib_engine
]# exit
]$ exit

Once all four engines are installed, return to the orchestrator Topology page and Resources page to ensure everything went smoothly. You should see that the orchestrator has recognized four processors as shown in Fig. 23.


Fig. 23 Four Processors on the Orchestrator Topology Page
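If you also want a command-line confirmation, a one-line loop from the CCC checks that the engine is active on every processor VM. The hostnames below are the ones used throughout this tutorial; systemctl is-active prints active for each running engine.

]$ for h in aws-p1 aws-p2 azr-p1 gcp-p1; do echo -n "$h: "; ssh centos@$h systemctl is-active ib_engine; done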

After that, you’re ready to move on to creating a full mesh between your processor nodes.

Summary

In this chapter you installed Bayware processors and Open vSwitch on four VMs in your infrastructure. You used the orchestrator Topology and Resources buttons extensively to monitor the installation process. You finally created a full mesh between all four processor nodes with a few simple clicks on the orchestrator.

Next up: install your first application, Getaway App…