Isolated vRouter Setup and Testing with Trex

 

 

Test Methodology

7.2.2019 (1.2)


Version | Date | Author(s) | Comment
--------|------|-----------|--------
1.0 | 8.20.2018 | Joseph Gasparakis joseph.gasparakis@intel.com, Subarna Kar subarna.kar@intel.com, Savannah Loberger savannah.loberger@intel.com, Wang, Yipeng yipeng1.wang@intel.com | Initial Version
1.1 | 2.4.2019 | Savannah Loberger savannah.loberger@intel.com, Wang, Yipeng yipeng1.wang@intel.com | Added updates with commit IDs and information for DPDK setup required before running vRouter. Updated to vRouter version 5.0.
1.2 | 7.2.2019 | Matthew Davis Matthew.Davis.2@team.telstra.com | Typo fixes; additional commands (wget, tar etc.); sandeshy.hh workaround added


Overview

Goals

  • This setup was originally created to find the limitations and bottlenecks of vRouter version 4.0, and then to implement optimizations of the virtual router (vRouter) used in Tungsten Fabric (tungstenfabric.io).
  • An isolated vRouter is a vRouter running outside of the usual Contrail environment. There is no agent or other software controlling the flows or paths. The user programs the vRouter datapath through the vrouter utilities, such as vtest (the vrouter unit test framework) and vif (the vrouter interface tool).
  • The eventual goal is to run the isolated vRouter from the master branch. That would provide a good opportunity for posting patches to the community and continuing to improve Tungsten Fabric.

 

 

 

 

Setup Information

  • Ubuntu 16.04 LTS
  • X710-DA4 NICs
    • 2 ports connected on each machine

Installation

Machine 1 (vRouter and Qemu)

Install vRouter

 

  1. As root, clone the Contrail vRouter source and its dependencies. The clone commands below place each repository directly in the directory the build expects:

# cd /root

# git clone https://github.com/Juniper/contrail-vrouter.git vrouter

# git clone https://github.com/Juniper/contrail-build.git tools/build

# git clone https://github.com/Juniper/contrail-sandesh.git tools/sandesh

# git clone https://github.com/Juniper/contrail-dpdk.git third_party/dpdk

# git clone https://github.com/Juniper/contrail-common/ src/contrail-common

  2. Check out the sandesh_compile branch of contrail-common:

# cd src/contrail-common

# git checkout sandesh_compile

# cd /root

 

 

 

 

Now the directory tree should look like this:

root
|
---vrouter (this is the contrail-vrouter)
|
---tools
|          |
|          ----build (this is the contrail-build)
|          |
|          ----sandesh (this should be the contrail-sandesh)
|
---third_party
|          |
|          ----dpdk (this should be the contrail-dpdk)
|
---src
|          |
|          ----contrail-common (this is another library needed for sandesh to work)
|
---SConstruct (this should be the file you copy from contrail-build)

 

  3. Enter each folder and check out the following commit IDs. This ensures that all the projects are at compatible revisions and work together successfully.

# cd <folder>

# git checkout <commit-id>

Folder | Commit-id
-------|----------
vrouter | bdf961e447ecada259548905e1582a8696878443
third_party/dpdk | c5841c5284bca2f6f1afe077131489674324db1c
tools/sandesh | b5d5c1ee1117f59d8f00de620c8b9db236f6cb1e
tools/build | a99cefda8c8b22174347e276ad97b85155086874
src/contrail-common | c93ef4b32cb64faac8267e2b7cd58c5c1ecb4f87

 

  4. Copy the SConstruct file to the root folder:

# cd /root

# cp tools/build/SConstruct ./

 

 

  5. To compile, you may need the following packages (you may need to install some packages individually, as we have found that the package manager may skip some):

# apt install -y libboost-all-dev libnl-genl-3-dev libxml2-dev liburcu-dev byacc flex libpcap-dev scons python python-pip pkg-config zlib1g-dev libglib2.0-dev libfdt-dev libpixman-1-dev cloud-image-utils bison binfmt-support

 

For our setup, the following liburcu packages were also needed:

# wget http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/libu/liburcu/liburcu2_0.8.5-1ubuntu1~cloud0_amd64.deb

# wget http://ubuntu-cloud.archive.canonical.com/ubuntu/pool/main/libu/liburcu/liburcu-dev_0.8.5-1ubuntu1~cloud0_amd64.deb

The package names may vary. The build system should warn you if you lack some packages.

To install these packages run:

# dpkg -i liburcu2_0.8.5-1ubuntu1~cloud0_amd64.deb

# dpkg -i liburcu-dev_0.8.5-1ubuntu1~cloud0_amd64.deb

 

You may also have to install the following .deb files (wget then dpkg -i):

https://master.dl.sourceforge.net/project/libipfix/RELEASES/libipfix-dev_0.8.1-1ubuntu1_amd64.deb

http://downloads.datastax.com/cpp-driver/ubuntu/16.04/cassandra/v2.11.0/cassandra-cpp-driver_2.11.0-1_amd64.deb

http://downloads.datastax.com/cpp-driver/ubuntu/16.04/cassandra/v2.11.0/cassandra-cpp-driver-dev_2.11.0-1_amd64.deb

http://downloads.datastax.com/cpp-driver/ubuntu/16.04/cassandra/v2.11.0/cassandra-cpp-driver-dbg_2.11.0-1_amd64.deb

  6. Compile vrouter. If no option is given, the default build is debug; the production build has better performance. (The -j option may not work; if you get an error, remove it.)

# cd /root

# scons vrouter -j2 --opt=production

(Use --opt=debug for a debug build.)

 

If you get an error about sandeshy.hh, try running:

# grep -r sandeshy\.hh -l | xargs sed -i 's/sandeshy\.hh/sandeshy\.h/g'

 

  7. After a successful build, the DPDK vRouter binary is located at:

/root/build/debug/vrouter/dpdk/contrail-vrouter-dpdk

or

/root/build/production/vrouter/dpdk/contrail-vrouter-dpdk

 

 

Install Qemu

  1. In the root directory, download the qemu source code and build it (we used qemu 2.11.1 in our setup):

# wget https://github.com/qemu/qemu/archive/v2.11.1.tar.gz

# tar -xzf v2.11.1.tar.gz

# cd qemu-2.11.1

# ./configure

# make

  2. The qemu build can take a couple of hours, so it is better to use make's -j option (see your distro's man pages), depending on the number of cores in your system.

 

Machine 2 (Trex and DPDK)

Install DPDK

  1. We used DPDK version 18.05.1; you can find all DPDK source code at www.dpdk.org:

$ wget https://fast.dpdk.org/rel/dpdk-18.05.1.tar.xz

$ tar xf dpdk-18.05.1.tar.xz

$ cd dpdk-18.05.1

  2. Below is the list of commands that we ran to set up DPDK for Trex.

First build the DPDK environment (x86_64-native-linuxapp-gcc):

$ sudo apt install libnuma-dev

$ make config T=x86_64-native-linuxapp-gcc

$ make # use -j to speed this up

Become superuser (if not already):

$ sudo su

Second, insert the igb_uio module:

# modprobe uio

# insmod ./build/kmod/igb_uio.ko

Set up hugepages and enter the number of hugepages you want (64 is a good number to start with):

# mkdir -p /mnt/huge

# mount -t hugetlbfs nodev /mnt/huge

# echo 64 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

Repeat for other NUMA nodes if you have them:

# echo 64 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
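Each entry in that sysfs file reserves one 2048 kB (2 MB) page, so 64 pages set aside 128 MB per NUMA node. A minimal sketch of the sizing arithmetic (the helper name is ours, not part of DPDK):

```python
def hugepages_needed(total_mb, page_kb=2048):
    """Number of hugepages per NUMA node needed to cover total_mb of memory."""
    # Round up so a partial page still costs a whole page.
    return (total_mb * 1024 + page_kb - 1) // page_kb

print(hugepages_needed(128))   # 64, the count used above
print(hugepages_needed(1024))  # 512 pages for a 1 GB pool
```

Increase the count if Trex later complains about insufficient hugepage memory.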

 

Install Trex

  • The DPDK setup explained above must be done before running Trex.
  • Install a dependency:

# sudo apt-get install python3-distutils

  • We used Trex version 2.53:

# mkdir trex

# cd trex

# wget --no-cache http://trex-tgn.cisco.com/trex/release/v2.53.tar.gz

# tar -xzf v2.53.tar.gz

 


Running Isolated vRouter

Overview

  • Diagram of our configuration of vRouter:

VIF 0/0 is the bottom left one. VIF 0/1 is the bottom right.

Here are the MAC addresses for this setup.

Interface | Description | MAC | Different for you
----------|-------------|-----|------------------
Eth0 | Inside the VM | 02:e9:ee:49:c3:bc | No. Use this MAC value
Eth1 | Inside the VM | 02:e9:ee:49:c3:bd | No. Use this MAC value
VIF 0/0 | Physical NIC | 3c:fd:fe:9c:5b:19 | Yes, look it up with `ip a`
VIF 0/1 | Physical NIC | 3c:fd:fe:9c:5b:18 | Yes, look it up with `ip a`
VIF 0/2 | vRouter interface, looking up into VM | 00:00:5e:00:01:00 | No. Use this MAC value
VIF 0/3 | vRouter interface, looking up into VM | 00:00:5e:00:01:01 | No. Use this MAC value
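Note the pattern in the table: the hardcoded VM MACs start with 02:, which sets the locally-administered bit, so they cannot collide with burned-in NIC addresses and are safe to reuse verbatim. A small sketch of that check (the helper is ours, not from any vRouter tool):

```python
def is_locally_administered(mac):
    """True if the locally-administered bit (0x02) of the first octet is set."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)

print(is_locally_administered("02:e9:ee:49:c3:bc"))  # True: synthetic VM MAC
print(is_locally_administered("3c:fd:fe:9c:5b:19"))  # False: burned-in NIC MAC
```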

 

 

Start vRouter

Setup DPDK

Return to the first machine. Before running vRouter, set up hugepages and insert the igb_uio module. The following commands have to be run again after every reboot of the machine.

  1. Set up 1 GB hugepages. (If the mount fails because no 1 GB hugepages are available, you may need to add them to the kernel command line in GRUB, e.g. default_hugepagesz=1G hugepagesz=1G hugepages=12, and reboot.)

# mkdir -p /mnt/huge

# mount -t hugetlbfs -o pagesize=1G none /mnt/huge

  2. For Ubuntu, the vRouter reads the hugepage information from /proc/mounts. Verify that the entry was made in the file:

# grep pagesize=1G /proc/mounts

  3. Insert the igb_uio module:

# modprobe uio

# cd /root

# insmod build/production/vrouter/dpdk/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko

  4. Choose the 2 physical interfaces you want to use. Write down their MAC addresses, PCI addresses and Linux interface names:

# ./third_party/dpdk/usertools/dpdk-devbind.py --status # for PCI addresses and Linux interface names

# ip a # to get the MAC addresses

  5. Bind the devices to igb_uio. Replace <eth device> and <pci address of nic> with your own device names and addresses:

# ifconfig <eth device> down

# ./third_party/dpdk/usertools/dpdk-devbind.py --bind=igb_uio <pci address of nic>

# ./third_party/dpdk/usertools/dpdk-devbind.py --bind=igb_uio <pci address of nic>

 

Run vRouter

  1. Command to start vRouter (use debug instead of production if that is what you built):

# cd /root

# taskset 0x3f ./build/production/vrouter/dpdk/contrail-vrouter-dpdk --no-daemon --socket-mem 1024,1024
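taskset takes a hex bitmask of CPU cores: 0x3f above pins vRouter to cores 0-5, while the 0xf00 used later for qemu selects cores 8-11, keeping the two off each other's cores. A sketch of how such masks are built (the helper is ours, not a system tool):

```python
def cpu_mask(cores):
    """Build a taskset-style hex CPU affinity mask from an iterable of core IDs."""
    mask = 0
    for core in cores:
        mask |= 1 << core  # one bit per core
    return hex(mask)

print(cpu_mask(range(6)))      # 0x3f: cores 0-5, used for vRouter above
print(cpu_mask(range(8, 12)))  # 0xf00: cores 8-11, used for qemu later
```

Adjust the mask for the core count and NUMA layout of your own system.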

 

Setup vRouter

Setup config file

sudo mkdir -p /etc/contrail

sudo vim /etc/contrail/contrail-vrouter-agent.conf

Add the following to the file:

[DEFAULT]
platform=dpdk

 

Add Interfaces to vRouter

You can add physical and virtual devices to the vRouter. You can run vRouter with only one interface of each. However, for this setup you need:

  • 2 physical interfaces
  • 2 virtual interfaces

 

  1. Add the 2 physical interfaces to vRouter. The MAC addresses should match the ones for these interfaces which you wrote down earlier:

# ./build/production/vrouter/utils/vif --add 0 --mac 3c:fd:fe:9c:5b:19 --vrf 0 --type physical --pmd

# ./build/production/vrouter/utils/vif --add 1 --mac 3c:fd:fe:9c:5b:18 --vrf 0 --type physical --pmd

  2. Add the 2 virtual interfaces to vRouter:

# ./build/production/vrouter/utils/vif --add 2 --mac 00:00:5e:00:01:00 --vrf 0 --type virtual --transport pmd --pmd --policy

# ./build/production/vrouter/utils/vif --add 3 --mac 00:00:5e:00:01:01 --vrf 0 --type virtual --transport pmd --pmd --policy

("--policy" sets the policy flag to on, so that packets from the VM will go through the flow lookup.)

(If you built the debug version, replace "production" with "debug" in the paths. If vif fails with "Error registering NetLink client", check that the vRouter process is still running.)

 

Check what has been set up:

# ./build/production/vrouter/utils/vif --list

It should look like this:

Vrouter Interface Table

 

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror

       Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2

       D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged

       Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored

       Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload, Df=Drop New Flows, L=MAC Learning Enabled

       Proxy=MAC Requests Proxied Always, Er=Etree Root, Mn=Mirror without Vlan Tag, Ig=Igmp Trap Enabled

 

vif0/0      PMD: 0 (Speed 10000, Duplex 1)

            Type:Physical HWaddr:3c:fd:fe:9c:5b:19 IPaddr:0.0.0.0

            Vrf:0 Mcast Vrf:65535 Flags:TcL3L2Dpdk QOS:0 Ref:15

            RX device packets:103  bytes:33470 errors:0

            RX port   packets:101 errors:0

            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

            RX packets:101  bytes:33330 errors:101

            TX packets:0  bytes:0 errors:0

            Drops:101

 

vif0/1      PMD: 1 (Speed 10000, Duplex 1)

            Type:Physical HWaddr:3c:fd:fe:9c:5b:18 IPaddr:0.0.0.0

            Vrf:0 Mcast Vrf:65535 Flags:TcL3L2Dpdk QOS:0 Ref:15

            RX device packets:102  bytes:33140 errors:0

            RX port   packets:90 errors:0

            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

            RX packets:90  bytes:29700 errors:90

            TX packets:0  bytes:0 errors:0

            Drops:90

 

vif0/2      PMD: 2

            Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:0.0.0.0

            Vrf:0 Mcast Vrf:65535 Flags:PL3L2Dpdk QOS:0 Ref:15

            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

            RX packets:0  bytes:0 errors:0

            TX packets:0  bytes:0 errors:0

 

vif0/3      PMD: 3

            Type:Virtual HWaddr:00:00:5e:00:01:01 IPaddr:0.0.0.0

            Vrf:0 Mcast Vrf:65535 Flags:PL3L2Dpdk QOS:0 Ref:15

            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

            RX packets:0  bytes:0 errors:0

            TX packets:0  bytes:0 errors:0

            Drops:0

 

 

 

 

Using vTest

  • vTest is the vrouter test tool that allows you to control the vRouter through an XML file. Using vTest we are able to run vRouter without the agent present.

Example xml File

For our setup we created 3 nexthops, 2 route entries, 1 flow entry, and 2 MPLS labels pointing into the VM.

Some of the MAC addresses in the following file must be changed. Read the XML comments.

Contents of mpls_add.xml (when trying to comprehend this file, please note that the endianness of some numeric fields is not what you might expect):

 

<?xml version="1.0"?>

<test>

    <test_name>Adds nexthop, </test_name>

    <message>

        <vr_nexthop_req>

<!--This nexthop is to receive the tunneled packet and is obtained by route lookup of outer dst IP -->

            <h_op>Add</h_op>

            <nhr_type>1</nhr_type> <!-- RCV type -->

            <nhr_id>21</nhr_id> <!-- you can choose any number, other than 0 which is the default nexthop to drop the pkt -->

            <nhr_family>2</nhr_family> <!-- type AF_INET -->

            <nhr_encap_oif_id>1</nhr_encap_oif_id>

<!-- In case of a non tunneled packet, it will be sent out of the other NIC -->

            <nhr_vrf>0</nhr_vrf>

            <nhr_flags>1</nhr_flags>

        </vr_nexthop_req>

        <return>0</return>

    </message>

    <message>

        <vr_nexthop_req>

<!--This nexthop is for inner packet, obtained from mpls tag and points to the corresponding VM the pkt should be sent to -->

            <h_op>Add</h_op>

            <nhr_type>2</nhr_type> <!-- encap type -->

            <nhr_id>20</nhr_id>

            <nhr_family>7</nhr_family> <!-- type AF_BRIDGE -->

            <nhr_encap_oif_id>2</nhr_encap_oif_id>

<!--pointing to the VM which is interface 2 in our setup -->       

<nhr_encap>02:e9:ee:49:c3:bc:00:00:5e:00:01:00:08:00</nhr_encap> <!--MAC address of interface in VM and MAC address of virtual interface in vrouter, then 08:00 -->

            <nhr_vrf>0</nhr_vrf>

            <nhr_flags>7</nhr_flags>

        </vr_nexthop_req>

        <return>0</return>

   </message>

   <message>

        <vr_nexthop_req>

<!--This nh is for pkts coming from 3rd vif from VM to go out of the 1st physical vif to go to trex -->

            <h_op>Add</h_op>

            <nhr_type>3</nhr_type> <!-- Tunnel type -->

            <nhr_id>22</nhr_id>

            <nhr_encap_oif_id>1</nhr_encap_oif_id> <!--go out of NIC 1 -->

            <nhr_encap>68:05:ca:04:4a:91:3c:fd:fe:9c:5b:18:08:00</nhr_encap>

            <nhr_tun_sip>33686019</nhr_tun_sip> <!-- 2.2.2.3 -->

            <nhr_tun_dip>33686021</nhr_tun_dip> <!-- 2.2.2.5 -->

            <nhr_tun_sport>0</nhr_tun_sport>

            <nhr_tun_dport>0</nhr_tun_dport>

            <nhr_vrf>0</nhr_vrf>

            <nhr_flags>65</nhr_flags> <!--NH_FLAG_TUNNEL_UDP_MPLS, NH_FLAG_VALID -->

        </vr_nexthop_req>

        <return>0</return>

   </message>

   <message>

        <vr_route_req>

<!--This route lookup is to connect pkts with outer dst IP 2.2.2.2 to RCV NH of id 21. In full setup, the IP shd be same as vhost0's IP -->

            <h_op>Add</h_op>

            <rtr_family>2</rtr_family>

            <rtr_nh_id>21</rtr_nh_id> <!--ID of the RCV nexthop defined earlier -->

            <rtr_prefix>2.2.2.2</rtr_prefix> <!--this can be any IP address you mention in outer dst IP in TREX -->

            <rtr_prefix_len>8</rtr_prefix_len>

            <rtr_vrf_id>0</rtr_vrf_id>

        </vr_route_req>

        <return>0</return>

  </message>

  <message>

        <vr_route_req>

<!--This is to create bridge entry for pkts coming out of the VM to reach trex port through NIC -->

<!--Naturally it shd search for TREx dst MAC/RX VM's MAC in bridge, in this case, l2fwd, no mac updating, so pkt has same dst MAC as being sent by TREX, as set in udp_2pkt_simple.py -->

            <h_op>Add</h_op>

            <rtr_family>7</rtr_family>

            <rtr_nh_id>22</rtr_nh_id>

            <rtr_label_flags>3</rtr_label_flags> <!-- BR_BE_VALID_FLAG -->

            <rtr_mac>02:e9:ee:49:c3:bc</rtr_mac>

            <rtr_label>5</rtr_label> <!-- does this become MPLS label? -->

            <rtr_vrf_id>0</rtr_vrf_id>

        </vr_route_req>

        <return>0</return>

  </message>

  <message>

        <vr_flow_req>

<!--This flow is for the inner packet lookup after mpls header decapsulation, it looks at the 5 tuples -->

           <fr_op>flow_set</fr_op>

           <fr_flow_sip_l>50397441</fr_flow_sip_l> <!--decimal version of src IP 3.1.1.1 -->

           <fr_flow_sip_u>0</fr_flow_sip_u>

           <fr_flow_dip_l>67174657</fr_flow_dip_l> <!-- decimal version of dst IP 4.1.1.1 -->

           <fr_flow_dip_u>0</fr_flow_dip_u>

           <fr_family>2</fr_family>

           <fr_index>-1</fr_index>

           <fr_flags>1</fr_flags>

           <fr_flow_proto>17</fr_flow_proto> <!-- UDP -->

           <fr_flow_sport>60185</fr_flow_sport><!-- 60185 is 6635 (the MPLSoUDP port) with its bytes swapped; this value is hardcoded in the vrouter code -->

           <fr_flow_nh_id>0</fr_flow_nh_id>

           <fr_action>2</fr_action> <!-- FORWARD -->

           <fr_flow_dport>60185</fr_flow_dport>

        </vr_flow_req>

        <return>0</return>

  </message>

  <message>

     <vr_mpls_req>

<!--Adding mpls tag, which should be done by agent when VM comes up -->

         <h_op>Add</h_op>

         <mr_label>4</mr_label> <!--Randomly chosen, shd be mentioned in TREX pkt -->

         <mr_nhid>20</mr_nhid> <!--points to the ENCAP nexthop defined earlier -->

     </vr_mpls_req>

     <return>0</return>

  </message>

  <message>

        <vr_mpls_req>

            <h_op>Add</h_op>

            <mr_label>5</mr_label>

            <mr_nhid>21</mr_nhid>

        </vr_mpls_req>

        <return>0</return>

  </message>

</test>
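The numeric fields above are easy to get wrong because of byte order: the IP fields (fr_flow_sip_l, nhr_tun_sip, etc.) are the plain big-endian integer of the dotted address, while the port fields carry the 16-bit value with its bytes swapped (60185 is 6635 swapped). The nhr_encap value is simply destination MAC, source MAC, then the EtherType 08:00. A sketch of helpers for computing these values (the function names are ours, not part of vtest):

```python
import socket
import struct

def vtest_ip(dotted):
    """Dotted IPv4 address as the decimal used in the XML IP fields."""
    return struct.unpack(">I", socket.inet_aton(dotted))[0]

def vtest_port(port):
    """UDP port as the decimal used in the XML port fields (bytes swapped)."""
    return struct.unpack("<H", struct.pack(">H", port))[0]

def nhr_encap(dst_mac, src_mac, ethertype="08:00"):
    """<nhr_encap> value: destination MAC, source MAC, then the EtherType."""
    return ":".join([dst_mac, src_mac, ethertype])

print(vtest_ip("3.1.1.1"))  # 50397441, as in <fr_flow_sip_l>
print(vtest_ip("2.2.2.3"))  # 33686019, as in <nhr_tun_sip>
print(vtest_port(6635))     # 60185, as in <fr_flow_sport>
print(nhr_encap("02:e9:ee:49:c3:bc", "00:00:5e:00:01:00"))
```

Use these when adapting mpls_add.xml to your own addresses.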

 

Run vTest

  1. Add the MPLS tags, nexthops, and routes to vRouter. Save mpls_add.xml at /root and run:

# cd /root

# ./build/production/vrouter/utils/vtest/vtest mpls_add.xml

You can inspect the nexthops and flows with:

# ./build/production/vrouter/utils/nh --list

# ./build/production/vrouter/utils/flow -l

Start and Setup VM

Prep VM

Download and prep the images (Ubuntu 16.04 cloud images):

IMG_DATA="xenial-server-cloudimg-amd64-disk1.img"

IMG_UEFI="xenial-server-cloudimg-amd64-uefi1.img"

URL_PREFIX=https://cloud-images.ubuntu.com/xenial/current/

USER_DATA="user-data.img"

wget ${URL_PREFIX}${IMG_DATA} --show-progress

wget ${URL_PREFIX}${IMG_UEFI} --show-progress

/root/qemu-2.11.1/qemu-img resize "${IMG_DATA}" +128G

 

Prep the user data for cloud-init. Edit the file user-data.txt:

#cloud-config

password: myPass

chpasswd: { expire: False }

ssh_pwauth: True

Create user data image

cloud-localds "$USER_DATA" user-data.txt

 

Start VM

  1. Qemu command to start the VM (using the Qemu built from source). The "path" arguments point to the sockets that were created when the virtual interfaces were added to the isolated vrouter; the last two options add a user-mode network interface so the VM can reach the internet:

# taskset 0xf00 ./qemu-2.11.1/x86_64-softmmu/qemu-system-x86_64 \

  -m 3G \

  -drive "file=${IMG_UEFI},format=qcow2" \

  -drive "file=${IMG_DATA},format=qcow2" \

  -drive "file=${USER_DATA},format=raw" \

  -cpu host \

  -object memory-backend-file,id=mem,size=3072M,mem-path=/mnt/huge,share=on \

  -numa node,memdev=mem \

  -mem-prealloc \

  -mem-path /mnt/huge,prealloc=on,share=on \

  -smp cores=4,threads=1,sockets=1 \

  --enable-kvm \

  -chardev socket,id=chr0,path=/var/run/vrouter/uvh_vif_2 \

  -netdev type=vhost-user,id=net0,chardev=chr0 \

  -device virtio-net-pci,netdev=net0,mac=02:e9:ee:49:c3:bc \

  -chardev socket,id=chr1,path=/var/run/vrouter/uvh_vif_3 \

  -netdev type=vhost-user,id=net1,chardev=chr1 \

  -device virtio-net-pci,netdev=net1,mac=02:e9:ee:49:c3:bd \

  -nographic \

  -device virtio-net-pci,netdev=net2 \

  -netdev user,id=net2

If qemu fails with "Could not access KVM kernel module", make sure the KVM kernel modules for your kernel are installed and loaded.

 

If you're doing this over an ssh connection (without a graphical window), you may need to add:

-serial mon:stdio

 

The login is user: ubuntu, password: myPass

To quit qemu, use Ctrl-A X

 

You will be able to run `apt update` from inside the VM, but not ping, because the user-mode internet interface blocks ICMP (and inbound ssh).

 

Configure VM

  1. Open the VM and stop the network manager:

# service network-manager stop

  2. Grow the root partition, because the original cloud image has no free room:

# apt-get install cloud-initramfs-growroot

# reboot

  3. After the reboot, stop the network manager again:

# service network-manager stop

 

 

 

 

Install L2FWD in VM

  1. Inside the same VM, install and compile DPDK to use L2FWD to forward the incoming packets. Below is the list of commands that we ran to set up DPDK for L2FWD (we used DPDK 18.08 inside the VM):

# wget https://fast.dpdk.org/rel/dpdk-18.08.tar.xz

# tar xf dpdk-18.08.tar.xz

# cd dpdk-18.08

# export RTE_SDK=/home/$USER/dpdk-18.08

# export RTE_TARGET=x86_64-native-linuxapp-gcc

First build the DPDK environment (x86_64-native-linuxapp-gcc):

# apt install build-essential libnuma-dev python pkg-config

# make config T=$RTE_TARGET

# make

Second, insert the igb_uio module:

# modprobe uio

# insmod build/kmod/igb_uio.ko

Set up hugepages and enter the number of hugepages you want (64 is a good number to start with):

# mkdir -p /mnt/huge

# mount -t hugetlbfs nodev /mnt/huge

# echo 64 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

  2. Connect the two eth interfaces to DPDK (the interface names may differ, e.g. ens3 and ens4):

# ./usertools/dpdk-devbind.py --bind=igb_uio eth0

# ./usertools/dpdk-devbind.py --bind=igb_uio eth1

  3. Build L2FWD (RTE_SDK is the path to the DPDK folder):

# export RTE_SDK=/home/$USER/dpdk-18.08

# export RTE_TARGET=x86_64-native-linuxapp-gcc

# cd $RTE_SDK/examples/l2fwd

# make

 

Setup Trex to send MPLS/UDP Packets

DPDK Nic Setup

  1. Write down the interface names, MAC and PCI addresses of your interfaces:

# cd trex/v2.53

# ./dpdk_nic_bind.py --status

# ip a

Or, you can use ./dpdk_nic_bind.py -t to display a table of information about your NICs.

  2. Using the displayed information, bind a minimum of 2 ports to the igb_uio driver, replacing the port numbers with your own:

# ./dpdk_nic_bind.py -b igb_uio 08:00.0 08:00.1

 

 

 

 

Trex configuration

  1. A sample Trex configuration file is stored at /etc/trex_cfg.yaml; add in your devices' MAC addresses and port information.
  2. If that file does not exist, run `sudo ./dpdk_setup_ports.py -i`. Say yes to MAC based, and use the MAC of the DUT. When prompted for the MAC, use what you originally saw with `ip a` on the other machine.

Example contents of trex_cfg.yaml:

### Config file generated by dpdk_setup_ports.py ###

- version: 2

  interfaces: ['63:00.0', '63:00.1']

  port_info:

      - dest_mac: 3c:fd:fe:9c:5b:19

        src_mac:  3c:fd:fe:3b:45:00

      - dest_mac: 3c:fd:fe:9c:5b:18

        src_mac:  3c:fd:fe:3b:45:02

  platform:

      master_thread_id: 0

      latency_thread_id: 1

      dual_if:

        - socket: 0

          threads: [2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62]

The src_mac values should match the MAC addresses of the interfaces on the generator NICs.

The dest_mac values should match the MAC addresses of the physical NICs on the unit under test (VIF 0/0 and VIF 0/1 here).

 

 

 


Sample Packet File for Trex

  • Based on RFC 7510, for the vRouter to accept MPLSoUDP packets, the UDP destination and source ports of the outer header must be set to 6635.
  • Also, the time-to-live (TTL) must be set to a high enough value (e.g. we used 63), otherwise the packet is dropped.
  • The inner UDP ports are also 6635 here; they must match the flow entry installed by mpls_add.xml, where the same ports appear byte-swapped as 60185.

Save the following as mpls_udp_1pkt_simple.py in Trex's stl/ directory:

from trex_stl_lib.api import *

from scapy.contrib.mpls import *

 

class STLS1(object):

 

    def create_stream (self):

        return STLStream(

            packet =

                    STLPktBuilder(

                        pkt = Ether()/IP(src="2.2.2.1",dst="2.2.2.2")/UDP(dport=6635,sport=6635)/MPLS(label=0x04,ttl=63)/Ether(dst="02:e9:ee:49:c3:bc")/IP(src="1.1.1.3",dst="1.1.1.4")/UDP( dport=6635,sport=6635 [DM28] )/(10*'x')

                    ),

             mode = STLTXCont())

 

    def get_streams (self, direction = 0, **kwargs):

        # create 1 stream

        return [ self.create_stream() ]

 

# dynamic load - used for trex console or simulator

def register():

    return STLS1()
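The MPLS(label=0x04, ttl=63) layer scapy builds above is a 4-byte shim header: 20 bits of label, 3 bits of traffic class, a bottom-of-stack bit, and 8 bits of TTL (the RFC 3032 layout). A standalone sketch of that packing, without scapy (the helper name is ours):

```python
def mpls_shim(label, ttl, tc=0, bottom_of_stack=1):
    """Pack an MPLS shim header: label(20) | TC(3) | S(1) | TTL(8)."""
    word = (label << 12) | (tc << 9) | (bottom_of_stack << 8) | ttl
    return word.to_bytes(4, "big")

print(mpls_shim(4, 63).hex())  # label 4 (added by mpls_add.xml), TTL 63
```

The label here must match the mr_label value added by mpls_add.xml, and the TTL must stay high enough that the packet is not dropped.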


Running Traffic

Forwarding Packet(s) in VM

Running L2FWD

  1. Forward the incoming traffic inside the VM between eth0 and eth1 (-p 0x03 is the port mask enabling DPDK ports 0 and 1):

# ./examples/l2fwd/build/app/l2fwd -l 0-3 -n 4 -- -p 0x03 --no-mac-updating

 

Running Trex

Trex Stateless Mode

  1. In one terminal of Machine 2, start Trex in interactive mode (at this point there should be hugepages available for Trex, set up in the earlier DPDK step):

# cd trex/v2.53

# ./t-rex-64 -i -c 4

 

  2. In another terminal, start the Trex console and start sending traffic. The commands below configure the receiving port (port 1) in promiscuous mode to receive the incoming packets, and send MPLSoUDP packets from port 0 for 60 seconds:

# cd trex/v2.53

# ./trex-console

trex> stats

trex> portattr --port 1 --prom on

trex> start -p 0 -d 60 -f stl/mpls_udp_1pkt_simple.py -m 100%

trex> stats

trex> quit


References

 

  • For more information about setting up and running DPDK, refer here:

https://doc.dpdk.org/guides/index.html

  • For more information about trex, refer here:

https://trex-tgn.cisco.com/trex/doc/trex_manual.html#_download_and_installation

  • For more information about running the trex console refer here:

https://trex-tgn.cisco.com/trex/doc/trex_console.html

  • For more information about the vRouter tool, vif:

https://www.juniper.net/documentation/en_US/contrail2.0/topics/task/configuration/vrouter-cli-utilities-vnc.html   

  • For more information about the vRouter tool, vtest:

https://github.com/Juniper/contrail-vrouter/tree/master/utils/vtest  

 

 

