CDP / LLDP

CDP (Cisco Discovery Protocol) / LLDP (Link Layer Discovery Protocol) – protocols that allow network devices to announce their presence to other devices.

CDP is a Cisco proprietary protocol, so it works only between Cisco devices. LLDP is an open, vendor-neutral protocol.

Usage examples

show cdp neighbors
show lldp neighbors
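
To see more details about each neighbor (management address, platform, software version), the detail variants of the same commands can be used:

show cdp neighbors detail
show lldp neighbors detail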

Best practice / configuration examples

CDP is enabled by default on all Cisco devices. Best practice is to disable CDP on customer-facing interfaces (and any other interfaces facing another company) and keep CDP enabled on internal-facing connections:

interface ethernet 1/10
  description This is where our customer connects
  no cdp enable

In non-Cisco (or mixed) networks the best practice is to use LLDP. LLDP is disabled by default on Cisco devices, so it is advised to enable it in a mixed network.
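
For reference, a minimal sketch of enabling it (NX-OS syntax; on IOS-style devices the equivalent is the global command lldp run):

feature lldp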

HSRP / FHRP

HSRP (Hot Standby Router Protocol) – Cisco proprietary protocol used for first hop redundancy.

FHRP (First Hop Redundancy Protocol) – the umbrella term for protocols with the same goal, first hop redundancy; the open-standard counterpart of HSRP is VRRP.

Two switches are required for the implementation. HSRP uses a virtual MAC address and a virtual IP address to process packets. (And yes, in our case it runs on VLAN interfaces.)

Sample configuration

SW1 configuration:

interface Vlan10
  description SuperVLAN for HSRP
  no shutdown
  no ip redirects
  ip address 10.10.10.2/24
  hsrp version 2
  hsrp 1
    preempt
    priority 50
    timers msec 250 msec 750
    ip 10.10.10.1

SW2 configuration:

interface Vlan10
  description SuperVLAN for HSRP
  no shutdown
  no ip redirects
  ip address 10.10.10.3/24
  hsrp version 2
  hsrp 1
    preempt
    priority 50
    timers msec 250 msec 750
    ip 10.10.10.1
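
Note: the configurations above assume the required NX-OS features are already enabled; if not, turn them on first. Also, with equal priorities (50 on both switches) the tie is broken by the higher interface IP address, so in practice one switch is usually given a higher priority (e.g. 150) to make it the preferred active gateway.

feature hsrp
feature interface-vlan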

What is VRF?

VRF – Virtual Routing and Forwarding. Basically it is a virtual router within a physical router, with its own separate configuration and routing tables/databases. To use a VRF, an interface (or subinterface) must be assigned to it.

Sample configuration.

Basic VRF configuration:

vrf context superVRF
  ip name-server 8.8.8.8
  ip name-server 8.8.8.8 use-vrf na

Add interface to VRF:

interface Ethernet 1/10
  vrf member superVRF
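
Keep in mind that on NX-OS assigning an interface to a VRF removes its existing Layer 3 configuration, so the IP address has to be re-applied afterwards (the address below is just an example):

interface Ethernet 1/10
  vrf member superVRF
  ip address 10.10.10.11/24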

Static route for VRF:

vrf context superVRF
  ip route 0.0.0.0/0 10.10.10.10
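
To confirm the route landed in the right table, check the VRF-specific routing table:

show ip route vrf superVRF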

Configure Dynamic routing for VRF (OSPF as example):

router ospf 1
  vrf superVRF
    router-id 10.10.10.11
    redistribute direct route-map ospf-direct
    log-adjacency-changes detail
    
route-map ospf-direct permit 0
  match ip address prefix-list ANY
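
The route-map above references a prefix-list named ANY that is not defined in this snippet; a minimal definition that matches any prefix could look like this:

ip prefix-list ANY seq 5 permit 0.0.0.0/0 le 32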

What is SVI?

SVI – Switched Virtual Interface. A virtual interface on a Cisco device that connects to and routes traffic for a VLAN. It allows the switch to communicate with devices in the VLAN, such as computers or servers, and to route traffic between them and external networks (with some extra switch configuration).

Sample configuration:

interface Vlan10
  description SuperVLAN
  ip address 10.10.10.10/24
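
Note that on NX-OS the SVI feature (feature interface-vlan, mentioned in the HSRP section above) has to be enabled before VLAN interfaces can be created. To verify the interface state and address:

show interface vlan 10
show ip interface brief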

Switchport access/trunk mode

The two most useful modes in which switch ports can operate are ‘access’ and ‘trunk’ mode.

Access mode – incoming traffic is tagged with the access VLAN by the switch and processed further. Outgoing traffic on this port is sent only for the associated VLAN.

Trunk mode – accepts tagged incoming traffic and sorts it out based on the tags/VLANs. Outgoing traffic on this port is sent only for the associated (allowed) VLANs.

In general: trunk mode is for inter-switch links (or links to servers that support trunking); access mode is for connections to end devices.

Sample configurations.

Port in access mode:

interface Ethernet1/10
  description To CoolPrinter
  switchport
  switchport access vlan 100
  no shutdown

Port in trunk mode:

interface Ethernet1/11
  description Link to AnotherCoolSwitch
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 100,200-300
  no shutdown

How to check the port mode (is it in access or trunk mode):

show interface ethernet 1/54 | include Port
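
For the full switchport details (operational mode, access VLAN, allowed trunk VLANs), the per-interface switchport view is also useful:

show interface ethernet 1/11 switchport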

How to deploy Oxidized server in Docker container

Oxidized is a network device configuration backup tool. Very useful when you have hundreds of switches/routers and want to keep the configuration of those devices along with the configuration version history.

Open ports 80/tcp and 443/tcp:

#firewall-cmd --add-port=80/tcp

#firewall-cmd --add-port=443/tcp

#firewall-cmd --runtime-to-permanent

#firewall-cmd --reload
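
To double-check that the ports are now open:

#firewall-cmd --list-ports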

Create directory for service:

#mkdir /opt/oxidized

Create a shell script for Oxidized and make it executable:

#cd /opt/oxidized
#touch oxi.sh
#echo '#!/bin/sh' >>oxi.sh
#echo '/usr/local/sbin/docker-compose -f /opt/oxidized/docker-compose.yml down' >>oxi.sh
#echo '/usr/local/sbin/docker-compose -f /opt/oxidized/docker-compose.yml up -d' >>oxi.sh
#chmod +x oxi.sh
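
The script simply recreates the containers, so once the compose file and the Oxidized config below are in place it can be run manually (or from cron) to (re)start the stack:

#/opt/oxidized/oxi.sh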

Create compose file:

#cd /opt/oxidized
#touch docker-compose.yml
#vi docker-compose.yml

For reference – an example docker-compose configuration file:

version: '3'
services:
  oxi:
    image: oxidized/oxidized:latest
    restart: always
    environment:
      - user.name=Oxidized_user_for_device_config_backups
      - [email protected]
    volumes:
      - /opt/oxidized:/root/.config/oxidized
      - /opt/oxidized/model:/var/lib/gems/2.5.0/gems/oxidized-0.28.0/lib/oxidized/model
    networks:
      - global

  web:
    image: nginx:latest
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/nginx:/etc/nginx
      - /var/log/nginx:/var/log/nginx
    networks:
      - global
    depends_on:
      - oxi

networks:
  global:
    ipam:
      config:
        - subnet: "10.10.10.0/24"

Create Oxidized configuration file:

#cd /opt/oxidized
#touch config
#vi config

For reference: sample Oxidized configuration file:

---
username: username
password: password
model: nxos
resolve_dns: true
interval: 86400
use_syslog: false
debug: true
threads: 30
timeout: 120
retries: 1
prompt: ruby/regexp /^(\r?[\w.@_()-]+[#]\s?)$/
next_adds_job: false
vars:
  remove_secret: true
  auth_methods:
  - password
rest: 0.0.0.0:8888
groups: {}
models: {}
pid: "/var/run/oxidized.pid"
log: "/root/.config/oxidized/oxidized.log"
crash:
  directory: "/root/.config/oxidized/crashes"
  hostnames: false
stats:
  history_size: 10
input:
  default: ssh, telnet
  debug: false
  ssh:
    secure: false
output:
  default: git
  git:
    single_repo: true
    user: oxidized
    email: [email protected]
    repo: /root/.config/oxidized/output/configs.git
hooks:
  push_to_remote:
    type: githubrepo
    events: [post_store]
    remote_repo: http://gitlab.fancydomain.ca/oxidized/oxidized.git
    username: oxidized
    password: password
source:
  default: csv
  csv:
    file: "/root/.config/oxidized/router.db"
    delimiter: !ruby/regexp /:/
    map:
      name: 0
      ip: 1
      model: 2
      login: 3
      password: 4
    vars_map:
      enable: 5
    gpg: false
model_map:
  cisco: nxos    # map devices reported as 'cisco' to a single oxidized model (a second 'cisco: ios' entry would be a duplicate key, which is not valid YAML)
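
For reference, with the CSV source and the map above, each line of /root/.config/oxidized/router.db follows the name:ip:model:login:password:enable format, for example (hypothetical values):

switch01:10.10.10.101:nxos:oxidized:SuperSecret:SuperEnable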

Modify NAT Iptables rules at Linux server

Check current configuration. Maybe rule already present?

#iptables -t nat -L -n

Save current configuration to file.

#iptables-save > /etc/sysconfig/some_file_you_want

Edit configuration file.

#vi /etc/sysconfig/some_file_you_want

For Destination NAT add record to PREROUTING section.

Sample: all incoming traffic to IP 10.10.10.10 on port tcp/443 is NAT-ed to IP 192.168.0.10 port tcp/10000:

-A PREROUTING -d 10.10.10.10/32 -p tcp -m tcp --dport 443 -j DNAT --to-destination 192.168.0.10:10000

For Source NAT add record to POSTROUTING section.

Sample: outgoing traffic from 192.168.0.10 “to the world” on port tcp/25 will be NAT-ed with source address 10.10.10.10:

-A POSTROUTING -s 192.168.0.10/32 -o eth0 -p tcp -m tcp --dport 25 -j SNAT --to-source 10.10.10.10

Check that you added the records to the correct sections.

Save file.

Apply iptables configuration to server.

#iptables-restore < /etc/sysconfig/some_file_you_want
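
On RHEL/CentOS-style systems using the iptables-services package (an assumption – adjust for your distribution), the rules are restored at boot from /etc/sysconfig/iptables, so to make the change permanent save it there as well:

#iptables-save > /etc/sysconfig/iptables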

Clear swap at Linux

Swap is used when the system runs out of available RAM. If a server has no RAM available for a process, that process will crash or hang. Swap is extremely slow compared to RAM and adds extra load to the disks.

When you run a lot of Linux servers it is a great idea to have a “swap cleaner” clear up swap space during low-load time (usually at night). The script below is used for this procedure.

Drop it at /etc/cron.daily to make it run daily.

It checks whether the server has enough available RAM before freeing the swap.

Code:

#!/bin/bash

# Available RAM and used swap, both in kB (columns from `free`)
free_mem="$(free | grep 'Mem:' | awk '{print $7}')"
used_swap="$(free | grep 'Swap:' | awk '{print $3}')"

echo -e "Free memory:\t$free_mem kB ($((free_mem / 1024)) MiB)\nUsed swap:\t$used_swap kB ($((used_swap / 1024)) MiB)"
if [[ $used_swap -eq 0 ]]; then
    echo "Congratulations! No swap is in use."
elif [[ $used_swap -lt $free_mem ]]; then
    # Disabling swap pushes its contents back into RAM, then swap is re-enabled empty
    echo "Freeing swap..."
    sudo swapoff -a
    sudo swapon -a
else
    echo "Not enough free memory. Exiting."
    exit 1
fi
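
To install it as a daily job (the file name is just an example; note that on some distributions run-parts skips file names containing a dot, so avoid a .sh extension here):

#cp clear_swap /etc/cron.daily/clear_swap
#chmod +x /etc/cron.daily/clear_swap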

What is MTU, why do we need the same MTU value on both sides of a link, and how to troubleshoot MTU issues.

MTU stands for Maximum Transmission Unit. Basically, it indicates the maximum packet size that a network device can process without fragmentation. The default MTU is 1500 bytes; therefore, any packet bigger than 1500 bytes has to be fragmented to be transferred successfully by network devices.

Misconfigured MTU on different sides of a link can lead to some “strange” network issues, as the default behavior of a switch is to drop a packet if it is bigger than the MTU configured on the interface.
Refer to the attached picture. It shows two switches, configured with the default value of 1500 on all interfaces.
All traffic goes through successfully in both directions. Packets smaller than 1500 bytes (like ping) pass freely, and packets bigger than 1500 bytes (like FTP traffic) are fragmented but still pass.

Now let’s imagine the MTU is misconfigured on the switch1 port facing switch2. Let’s say it is set to the common jumbo-frame value of 9000 bytes.
In this case ping still works fine in both directions, but a file transfer (with FTP) will fail in one direction while passing successfully in the other: we can still download anything from the “server” to the “client”, but we cannot upload anything to the “server”, because switch1 will try to transmit bigger chunks of data (up to 9000 bytes) and switch2 will drop all of these packets, as they are bigger than the MTU configured on its interface facing switch1.

How to diagnose the issue: with the standard tools – ping and tracert/traceroute. Both tools can be configured to send bigger packets (like 2000 bytes).
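
For example, to send a full-size 1500-byte packet without fragmentation (1472 bytes of payload plus 28 bytes of ICMP/IP headers), which should fail if the path MTU is smaller:

Windows: ping -f -l 1472 10.10.10.10
Linux:   ping -M do -s 1472 10.10.10.10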

When using PPPoE or GRE you should also pay attention to the MTU size and make sure it is configured correctly, since these encapsulations add header overhead and reduce the effective MTU.
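
As a sketch (IOS-style syntax; the values assume PPPoE’s 8 bytes of overhead), the usual mitigation is to lower the interface IP MTU and clamp the TCP MSS:

interface Dialer1
  ip mtu 1492
  ip tcp adjust-mss 1452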

How to limit RAM usage for buff/cache on Linux in 9 easy steps.

First I should mention: playing with this mindlessly can lead to server instability.

In some cases an application “eats” an unbelievable amount of memory for buffers/cache, which can lead to various negative outcomes.
For example, if a server has 48 GB of RAM and some application uses 40 GB of it for buffer/cache, it might be a good idea to limit the RAM usage of that application.
It is up to you to decide whether it is worth limiting the application’s RAM usage.

So, 9 easy steps:
1. Ensure that cgroups are enabled in your Linux system by checking if the cgroup_enable=memory option is present in the kernel command line. Edit the /etc/default/grub file, update the GRUB_CMDLINE_LINUX parameter if required, then regenerate the GRUB configuration and reboot.

2. Install the cgroup tools: 
#yum install libcgroup libcgroup-tools

3. Create a new cgroup directory to control memory usage: 
#mkdir /sys/fs/cgroup/memory/limited_group

4. Set the memory limit for the cgroup directory. 
#echo "1G" | sudo tee /sys/fs/cgroup/memory/limited_group/memory.limit_in_bytes

5. Edit the /etc/cgconfig.conf file. Adding this will set the restriction to 1G:
group limited_group {
  memory {
    memory.limit_in_bytes = 1G;
  }
}

6. Edit the /etc/cgrules.conf file to assign the application aaaa1 to the limited group:
  *:aaaa1    memory    limited_group/

7. Restart the services to apply the changes: 
#service cgconfig restart
#service cgred restart

8. Verify that the cgroup was created and the limit is applied (the PIDs of “aaaa1” should also appear in /sys/fs/cgroup/memory/limited_group/tasks): 
#cgget -g memory:/limited_group

9. Easy, right?