DNS, Gateway, Router Setup for Bhyve & iocage: TOTAL CONTAINERIZATION

root@bean     1.15   0%   ~  cat /etc/pf.conf                                                                                                                              210

#

THINKS TO SELF: Hrm, why yes, that is a $BOOTAY_KICKING prompt! I need to document it actually…later…
# Instant NAT
nat pass on ix0 from {172.16.0.0/24} to any -> (ix0)

# Better NAT/RDR
# Define the interfaces
ext_if = "ix0"
int_if = "bridge0"
tcp_svcs = "{ 22 2200 80 443 5000:6000 8000:9001 10000 }"
#container_net = $int_if:network

# Define the IP address of containers & ports for rdr/nat
FNASVM = "172.16.0.230"
FNASVM_TCP_PORTS = "{ 80, 443 }"

# Normalize packets & pass anything in TCP_SVCS
#scrub in all

# Define the NAT for the containers (match the bridge's network, not just its own address)
nat on $ext_if from $int_if:network to any -> ($ext_if)

# FREENAS VM: Redirect traffic on ports 8180 and 8443
rdr pass on $ext_if proto tcp from any to any port 8180 -> $FNASVM port 80
rdr pass on $ext_if proto tcp from any to any port 8443 -> $FNASVM port 443

# Hrm, maybe quick is too fast
#pass in quick on $ext_if proto tcp from any to any port $tcp_svcs
pass in on $ext_if proto tcp from any to any port $tcp_svcs
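
With the ruleset in place, turning the host into the actual gateway and loading the rules only takes the stock FreeBSD knobs. A minimal sketch, nothing here is specific to this box:

    sysrc pf_enable=YES gateway_enable=YES   # persist pf and IPv4 forwarding across reboots
    sysctl net.inet.ip.forwarding=1          # start forwarding right now
    pfctl -nf /etc/pf.conf                   # parse-only syntax check of the ruleset
    pfctl -f /etc/pf.conf                    # load it for real
    pfctl -s nat                             # confirm the nat/rdr rules took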

Managing FreeBSD Bhyve Containers With VM-BHYVE

SNAPSHOTS, CLONES, AND ROLLBACKS, OH MY!

One of the awesome FreeBSD tools I use frequently is vm-bhyve’s built-in snapshot and clone support:

    vm clone name[@snapshot] new-name
    vm snapshot [-f] name|name@snapshot

Later, if you like, you can restore a previous snapshot of your vm:

    vm rollback [-r] <name@snapshot>
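
For example, with the fnas11vm container snapshotted further down, a throwaway clone and a rollback look like this (the clone name is just an example):

    vm clone fnas11vm@2018-01-02-12:38:07 fnas11vm-lab   # new VM from the snapshot, original untouched
    vm rollback -r fnas11vm@2018-01-02-12:38:07          # roll fnas11vm back; -r discards newer snapshots, as with zfs rollback -r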

VM-BHYVE SNAPSHOT: Easy as Pie 😉

It’s best to make sure the container is powered off:

    vm poweroff $name
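
A quick sanity check that it really is down before snapshotting (vm list prints a state column):

    vm list | grep fnas11vm    # state should read Stopped, not Running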

Now, we can make the snapshot…

    root@bean   ~  vm snapshot fnas11vm                           2089

“Trust But Verify”

— Ronald Reagan

    root@bean   ~  zfs list -t snap | grep fnas11vm        1 ↵     2090

    NAME                                            USED  AVAIL  REFER  MOUNTPOINT
    zroot/vm/fnas11vm@2018-01-02-12:38:07              0      -    96K  -
    zroot/vm/fnas11vm/disk0@2018-01-02-12:38:07        0      -  1.21G  -
    zroot/vm/fnas11vm/disk1@2018-01-02-12:38:07        0      -  7.53M  -

Creating an image from the container for provisioning more containers!

    root@bean   ~  vm image create -d 'fnas11_image' fnas11vm        2099

    Creating a compressed image, this may take some time... 
    Image of fnas11vm created with UUID 650759c6-efff-11e7-8013-0cc47ac2a6ec
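
To actually provision new containers from that image, vm-bhyve’s image subcommands do the rest. A sketch using the UUID printed above; the new VM name is just an example, and the exact provision syntax is worth confirming against your vm-bhyve version:

    vm image list                                                        # stored images and their UUIDs
    vm image provision 650759c6-efff-11e7-8013-0cc47ac2a6ec fnas12vm     # new VM cloned from the image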

FIGHTING WITH FONTS!? REALLY!? Ok, this is BadASSDOM!

Sweet FreeBSD ZSH/POWERLINE9k CONSOLE PROMPT WITH AN OS ICON

Movie Reference: Knights of Badassdom

The Goal:

Requirements (install sketch just after this list):
  • powerline-status
  • powerline-fonts
  • patience of Gandhi
  • tenacity of a door-to-door salesperson
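
A rough install sketch. Exact port/package names drift between ports-tree versions, so treat these as assumptions and confirm with pkg search first; the Nerd Font .otf checked below already lives under /usr/local/share/fonts:

    pkg search powerline                   # find the real package names on your release
    pkg install zsh powerline-fonts        # shell plus powerline-patched fonts (name approximate)
    pip install --user powerline-status    # the python powerline itself, straight from PyPI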

    > vi kde4/share/apps/konsole/Shell.profile

:!ls /usr/local/share/fonts/Droid/Droid\ Sans\ Mono\ for\ Powerline\ Nerd\ Font\ Complete.otf                                   

    # kde4/share/apps/konsole/Shell.profile
    [Appearance]
    AntiAliasFonts=true
    ColorScheme=GreenOnBlack

    #Font=Source Code Pro for Powerline,15,-1,5,63,0,0,0,0,0
    Font=Droid Sans Mono for Powerline Nerd Font Complete,15,-1,5,63,0,0,0,0,0
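
Whether Konsole can actually see the new font is easy to confirm from fontconfig (if it is installed) before fighting with the profile any further:

    fc-cache -f /usr/local/share/fonts          # rebuild the font cache after copying the .otf in
    fc-list | grep -i 'powerline nerd font'     # the Droid Sans Mono entry should show up here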

Mirrored ZPOOL with ZFS Boot on Root FAIL … and Fix

matto@spock: cat /projects_share/docs/zfs_zpool-attachdrive.txt

iX SpecOps: MIRRORED ZPOOL ROOT/BOOT FIX

Description: SINGLE VDEV BOOT ZPOOL FOR PRODUCTION SYSTEM

The problem with Prometheus is that ada4 already had another pool on it, tank1 (hence, I suspect, the tank2 name for the root pool). Because that old root zpool and its bootcode were never destroyed when the Intel ada0/ada1 pair got its bootcode and GPT partitioning, the system booted from the broken ada3s2 partition whenever the Intel RAID broke or was not discovered first.

In fact, chances are this occurred many times and on nearly every device in the system, which is full of unmatched drives of various sizes with no apparent plan.

        zpool import                                     
           pool: tank1
             id: 13323130829716330915
          state: ONLINE
         status: The pool was last accessed by another system.
         action: The pool can be imported using its name or numeric identifier and
             the '-f' flag.
           see: http://illumos.org/msg/ZFS-8000-EY
         config:

                tank1       ONLINE
                  ada4p2    ONLINE
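
Before changing anything, it is worth inventorying what the box actually has, using read-only commands (device names as on this system):

    gpart show ada3 ada4     # partition tables on the two disks that should form the boot mirror
    zpool status tank2       # current single-vdev state of the root pool
    camcontrol devlist       # every drive the controllers actually see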

The Fix:

It was a correct instinct to attach the new device to the existing 30GB root zpool, tank2, and various permutations of that approach were attempted.

In addition, several methods were used to copy the existing partitions onto the second disk. This failed in the same fashion as directly attaching the drive to the existing zpool.

The errors seemed of little help.

        root@prometheus#> gpart backup /dev/ada3 | \
          gpart restore /dev/ada4
          gpart: geom 'ada4': Operation not permitted

        root@prometheus#> gpart backup /dev/ada3
          GPT 152
          1   freebsd-boot       40      512
          2   freebsd-zfs      552 58306560
          3   freebsd-swap 58307112  4194304

        root@prometheus#> gpart backup /dev/ada3 | \
          gpart restore /dev/ada4
        : Operation not permitted
        root@prometheus#> zpool attach $POOL_NAME ada3s2 ada4s2
        : Operation not permitted  # AND SO ON AND SO FORTH

## UNTIL:

        root@prometheus#> sysctl kern.geom.debugflags=0x10
        kern.geom.debugflags: 0 -> 16

        root@prometheus#> zpool attach -f tank2 /dev/ada3p2 /dev/ada4p2

Make sure to wait until resilver is done before rebooting.

If you boot from pool ‘tank2’, you may need to update boot code on newly attached disk ‘/dev/ada4p2’.

        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada4
        partcode written to ada4p1
        bootcode written to ada4
#################################### #### SUCCESS !
        zpool status tank2
        zsh: correct 'tank2' to 'tank' [nyae]? n
          pool: tank2
         state: ONLINE
        status: One or more devices is currently being resilvered.  The pool will
                continue to function, possibly in a degraded state.
        action: Wait for the resilver to complete.
          scan: resilver in progress since Thu Dec 14 19:25:29 2017
                31.2M scanned out of 23.1G at 201K/s, 33h26m to go
                30.2M resilvered, 0.13% done
        config:

                NAME        STATE     READ WRITE CKSUM
                tank2       ONLINE       0     0     0
                  mirror-0  ONLINE       0     0     0
                    ada3p2  ONLINE       0     0     0
                    ada4p2  ONLINE       0     0     0  (resilvering)
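
Once the resilver reports completed, re-check the pool and put the GEOM safety catch back where it was (it defaulted to 0 above); a closing sketch:

    zpool status tank2              # both ada3p2 and ada4p2 should be ONLINE with the scan marked complete
    sysctl kern.geom.debugflags=0   # restore the default GEOM write protection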