Steroid Hypervisor: FreeBSD + ZFS + cbsd

In this tutorial I want to show how easy and elegant it is to install FreeBSD in a server environment - on rented hardware or in your own data center, by hand or with orchestration tools like Ansible. Disk encryption, convenient storage management, a hypervisor for both containers and full virtual machines, a simple and intuitive firewall - all of this and more is available out of the box and takes little time to set up with the right approach.


Installation


Let's start with the simplest scenario. We have a server in front of us, or it sits somewhere far away, but we can feed it an .iso (or .img) on a CD-ROM or via IPMI. Or even simpler - the hoster offers an mfsbsd image directly from its control panel. In any case, boot from mfsbsd.


Linux rescue CD


It happens that the hoster does not offer FreeBSD in any form, but does provide a Linux rescue system. We boot into Linux, download the mfsbsd disk image and write it onto the hard drive:
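A quick aside that is not part of the original workflow: before writing anything with dd, double-check which block device is actually the target disk - on some rescue images it is vda or nvme0n1 rather than sda:

root@rescue:~# lsblk -d -o NAME,SIZE,MODEL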


root@rescue:~# wget https://mfsbsd.vx.sk/files/images/12/amd64/mfsbsd-12.1-RELEASE-amd64.img


root@rescue:~# dd if=mfsbsd-12.1-RELEASE-amd64.img of=/dev/sda bs=4M
22+1 records in
22+1 records out
92307456 bytes (92 MB) copied, 0.386325 s, 239 MB/s

root@rescue:~# reboot


Network adapter setup


Everything is fine when we have direct access to a monitor or can get a console over the network via IPMI / VNC / the web. But what if that is not possible and only a network connection is available? Then we need an mfsbsd image with the network settings already baked in. Let's build our own image with the network adapter and the SSH daemon configured. For this we need some FreeBSD host on which to build the image.


Download the original .iso


root@ns312777:~ # fetch https://download.freebsd.org/ftp/releases/amd64/amd64/ISO-IMAGES/12.0/FreeBSD-12.0-RELEASE-amd64-dvd1.iso


Mount it


root@ns312777:~ # mount -t cd9660 /dev/`mdconfig -f FreeBSD-12.0-RELEASE-amd64-dvd1.iso` /root/cd-rom


Download mfsbsd


root@ns312777:~ # fetch https://github.com/mmatuska/mfsbsd/archive/master.zip


root@ns312777:~ # unzip master.zip && cd mfsbsd-master


Now we edit the configuration files. For the IP configuration it is enough to create conf/rc.conf from conf/rc.conf.sample and adjust it. You should also create conf/authorized_keys and add your public key.
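For illustration only - the variable names are standard rc.conf settings, but the interface name em0 and the addresses are placeholders you have to adjust, and the sample file in the repo may contain a few more options - the static-IP part of conf/rc.conf could look like this:

hostname="mfsbsd"
ifconfig_em0="inet 37.79.8.111 netmask 255.255.255.0"
defaultrouter="37.79.8.254"
sshd_enable="YES"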


root@ns312777:~/mfsbsd-master # ls -la conf/


total 55
drwxr-xr-x  2 root  wheel    13 Dec  8 10:06 .
drwxr-xr-x  7 root  wheel    13 Dec  8 10:11 ..
-rw-r--r--  1 root  wheel   451 Dec  8 10:06 authorized_keys
-rw-r--r--  1 root  wheel    50 Nov 30 22:54 authorized_keys.sample
-rw-r--r--  1 root  wheel     3 Nov 30 22:54 boot.config.sample
-rw-r--r--  1 root  wheel   156 Nov 30 22:54 hosts.sample
-rw-r--r--  1 root  wheel   592 Nov 30 22:54 interfaces.conf.sample
-rw-r--r--  1 root  wheel  1310 Nov 30 22:54 loader.conf.sample
-rw-r--r--  1 root  wheel   739 Dec  8 10:06 rc.conf
-rw-r--r--  1 root  wheel   689 Nov 30 22:54 rc.conf.sample
-rw-r--r--  1 root  wheel    40 Nov 30 22:54 rc.local.sample
-rw-r--r--  1 root  wheel   108 Nov 30 22:54 resolv.conf.sample
-rw-r--r--  1 root  wheel   898 Nov 30 22:54 ttys.sample

Well, let's go!


root@ns312777:~/mfsbsd-master # make BASE=/root/cd-rom/usr/freebsd-dist


Extracting base and kernel ... done
Removing selected files from distribution ... done
Installing configuration scripts and files ... done
Generating SSH host keys ... done
Configuring boot environment ...x ./
x ./linker.hints
x ./kernel
 done
Installing pkgng ... done
Compressing usr ... done
Creating and compressing mfsroot ... done
Creating image file ...87072+0 records in
87072+0 records out
89161728 bytes transferred in 0.905906 secs (98422727 bytes/sec)
87040+0 records in
87040+0 records out
89128960 bytes transferred in 0.877165 secs (101610229 bytes/sec)
md3 created
md3p1 added
partcode written to md3p1
bootcode written to md3
md3p2 added
Calculated size of `./mfsbsd-12.0-RELEASE-p10-amd64.img.a47DUe1j': 80281600 bytes, 65 inodes
Extent size set to 32768
./mfsbsd-12.0-RELEASE-p10-amd64.img.a47DUe1j: 76.6MB (156800 sectors) block size 32768, fragment size 4096
        using 1 cylinder groups of 76.56MB, 2450 blks, 256 inodes.
super-block backups (for fsck -b #) at:
 64,
Populating `./mfsbsd-12.0-RELEASE-p10-amd64.img.a47DUe1j'
Image `./mfsbsd-12.0-RELEASE-p10-amd64.img.a47DUe1j' complete
612+1 records in
612+1 records out
80281600 bytes transferred in 36.018812 secs (2228880 bytes/sec)
 done

Why mfsbsd?


This is a compact build of FreeBSD that is loaded entirely into memory, which means we can freely do whatever we want with all the attached disks - something like a Linux rescue CD. The whole charm of this build is that it lets you install FreeBSD in literally one command.


After the Linux rescue CD, or after booting from the .iso / .img, we land in the mfsbsd shell.


root@mfsbsd:~ # cd bin/


We wipe the disks


root@mfsbsd:~/bin # gpart destroy -F /dev/ada0
root@mfsbsd:~/bin # gpart destroy -F /dev/ada1

or


root@mfsbsd:~/bin # ./destroygeom -d /dev/ada0 -d /dev/ada1


Destroying geom ada0:
        Deleting partition 1 ... done
        Deleting partition 2 ... done
Destroying geom ada1:
        Deleting partition 1 ... done
        Deleting partition 2 ... done

Then we install the system.


root@mfsbsd:~/bin # ./zfsinstall
Usage: ./zfsinstall [-h] -d geom_provider [-d geom_provider ...] [ -u dist_url ] [-r mirror|raidz] [-m mount_point] [-p zfs_pool_name] [-s swap_partition_size] [-z zfs_partition_size] [-c] [-C] [-l] [-4] [-A]

Because on hosts with up to three disks I prefer to put the system on a small partition mirrored across all of the disks. Thanks to ZFS, the remaining space can be laid out however you like, but more on that later. And here it is - the one and only command that installs the entire system:


root@mfsbsd:~/bin # ./zfsinstall -d /dev/ada0 -d /dev/ada1 -d /dev/ada2 -r mirror -p zsys -z 25G


This installs the system onto three disks in a mirror:


Fetching base files from: ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.0-RELEASE
/tmp/base.txz      147 MB 3670 kBps 41s
/tmp/kernel.txz     39 MB 2344 kBps 18s
Creating GUID partitions on ada0 ... done
Configuring ZFS bootcode on ada0 ... done
=>       40  937703008  ada0  GPT  (447G)
         40        472     1  freebsd-boot  (236K)
        512   52428800     2  freebsd-zfs  (25G)
   52429312  885273736        - free -  (422G)
Creating GUID partitions on ada1 ... done
Configuring ZFS bootcode on ada1 ... done
=>       40  937703008  ada1  GPT  (447G)
         40        472     1  freebsd-boot  (236K)
        512   52428800     2  freebsd-zfs  (25G)
   52429312  885273736        - free -  (422G)
Creating GUID partitions on ada2 ... done
Configuring ZFS bootcode on ada2 ... done
=>       40  937703008  ada2  GPT  (447G)
         40        472     1  freebsd-boot  (236K)
        512   52428800     2  freebsd-zfs  (25G)
   52429312  885273736        - free -  (422G)
Creating ZFS pool zsys on ada0p2 ada1p2 ada2p2 ... done
Creating zsys root partition: ... done
Creating zsys partitions: var tmp ... done
Setting bootfs for zsys to zsys/root ... done
NAME            USED  AVAIL  REFER  MOUNTPOINT
zsys            712K  23.7G    88K  none
zsys/root       264K  23.7G    88K  /mnt
zsys/root/tmp    88K  23.7G    88K  /mnt/tmp
zsys/root/var    88K  23.7G    88K  /mnt/var
Extracting FreeBSD distribution ... done
Writing /boot/loader.conf... done
Writing /etc/fstab...
Writing /etc/rc.conf... done
Copying /boot/zfs/zpool.cache ... done
Installation complete.
The system will boot from ZFS with clean install on next reboot

You may make adjustments to the installed system using chroot:
  chroot /mnt

Some adjustments may require a mounted devfs:
  mount -t devfs devfs /mnt/dev

WARNING - Don't export ZFS pool "zsys"!

After installation the file system of the installed OS is available under /mnt/, so we do some configuration before rebooting:


Copy the SSH key already added to the mfsbsd image into the new system


root@mfsbsd:~/bin # mkdir /mnt/root/.ssh && cp /root/.ssh/authorized_keys /mnt/root/.ssh/


We edit the SSH config (for example, change the port and allow root login)


root@mfsbsd:~/bin # ee /mnt/etc/ssh/sshd_config


edit the system startup configuration


root@mfsbsd:~/bin # ee /mnt/etc/rc.conf


zfs_enable="YES"
sshd_enable="YES"
hostname="hyper.bitbsd.org"
#
# You need a gateway defined for a working network setup
defaultrouter="37.79.8.254"
ifconfig_em0="inet 37.79.8.111 netmask 255.255.255.0"

Pay attention to the manufacturer of your network adapter, since the interface name depends on it (em0, igb0, re0 and so on). In some cases you can simply do this:


ifconfig_DEFAULT="DHCP" in /etc/rc.conf


Set the DNS server


root@mfsbsd:~/bin # ee /mnt/etc/resolv.conf


 nameserver 8.8.8.8 

Or you can simply do this


root@mfsbsd:~ # chroot /mnt


and add a user and generally do everything from inside the freshly installed OS.


Well, off we go


root@mfsbsd:~ # reboot


Even more hardcore ways


Before proceeding with the configuration of the installed OS, I want to note two more ways to boot into mfsbsd.


Standard FreeBSD Installer:



Choosing a Live CD


Create a memory disk


# mdconfig -a -t swap -s 2g -u 9
# newfs -U md9
# mount /dev/md9 /tmp
# cd /tmp

The memory disk is needed not so much for working with the mfsbsd files as because mfsbsd uses /tmp/ during installation, so there must be enough free space there.


# fetch https://github.com/mmatuska/mfsbsd/archive/master.zip


# unzip master.zip && cd mfsbsd-master/tools


Well, then the same thing - ./destroygeom and ./zfsinstall


Installation from a pre-installed Linux image:


If the server provider hands you a preinstalled Linux, we can still "install" our way by changing the GRUB bootloader configuration. To do this, add the following lines to grub.cfg on the current system:


menuentry "mfsbsd-12.0-RELEASE-amd64.iso" {
  # Path to the iso
  set isofile=/boot/boot-isos/mfsbsd-12.0-RELEASE-amd64.iso
  # (hd0,1) here may need to be adjusted of course depending where the partition is
  loopback loop (hd0,1)$isofile
  kfreebsd (loop)/boot/kernel/kernel.gz -v
  # kfreebsd_loadenv (loop)/boot/device.hints
  # kfreebsd_module (loop)/boot/kernel/geom_uzip.ko
  kfreebsd_module (loop)/boot/kernel/ahci.ko
  kfreebsd_module (loop)/mfsroot.gz type=mfs_root
  set kFreeBSD.vfs.root.mountfrom="ufs:/dev/md0"
  set kFreeBSD.mfsbsd.autodhcp="YES"
  # Define a new root password
  # set kFreeBSD.mfsbsd.rootpw="foobar"
  # Alternatively define hashed root password
  # set kFreeBSD.mfsbsd.rootpwhash=""
}

And put the .iso itself into /boot/boot-isos/ accordingly.


I never did it myself, so if you find a mistake or a flaw in the config, let me know.


I don't know of any other tricks for getting FreeBSD installed.


System configuration


Well, everything is installed and booting... To start we need to configure the network and firewall - in our case pf plays the main role, and I like this solution for its simplicity and clarity. We also need to configure the server's file system - we use ZFS because it is an incredibly flexible and efficient file system. Once the network and the FS are set up, we will install the hypervisor. If an increased level of security is required, additional utilities can be configured, but I will omit them in this tutorial. As a demonstration of ZFS's flexibility I will describe the configuration of two servers, one with 2 disks and one with 3 disks. Also, do not forget to update the system after installation - sometimes only something quite ancient is available to install.


freebsd-update -r 12.1-RELEASE upgrade


At the time of this writing, 12.1 is the latest version
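For reference (this is the standard freebsd-update workflow, not something specific to this setup): after the upgrade step you apply the downloaded updates, reboot into the new kernel, and run install once more for the userland:

root@:~ # freebsd-update install
root@:~ # shutdown -r now
root@:~ # freebsd-update install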


Dual disk server


I ended up with the following configuration:


root@:~ # gpart show /dev/ada0
=>        40  5860533088  ada0  GPT  (2.7T)
          40         472     1  freebsd-boot  (236K)
         512     8388608     2  freebsd-swap  (4.0G)
     8389120   167772160     3  freebsd-zfs  (80G)
   176161280  2097152000     4  freebsd-zfs  (1.0T)
  2273313280  3587219848     5  freebsd-zfs  (1.7T)

root@:~ # gpart show /dev/ada1
=>        40  5860533088  ada1  GPT  (2.7T)
          40         472     1  freebsd-boot  (236K)
         512     8388608     2  freebsd-swap  (4.0G)
     8389120   167772160     3  freebsd-zfs  (80G)
   176161280  2097152000     4  freebsd-zfs  (1.0T)
  2273313280  3587219848     5  freebsd-zfs  (1.7T)

root@:~ # zpool status
  pool: appdata
 state: ONLINE
  scan: none requested
config:
        NAME            STATE     READ WRITE CKSUM
        appdata         ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            ada0p4.eli  ONLINE       0     0     0
            ada1p4.eli  ONLINE       0     0     0
errors: No known data errors

  pool: miscdata
 state: ONLINE
  scan: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        miscdata      ONLINE       0     0     0
          ada0p5.eli  ONLINE       0     0     0
          ada1p5.eli  ONLINE       0     0     0
errors: No known data errors

  pool: zsys
 state: ONLINE
  scan: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        zsys        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0
errors: No known data errors

What does all this mean? We have a server with two disks, both 3 TB. We are going to run several applications on it; some of them are more or less critical, others are just "for playing around", so losing their data is no tragedy. Hence the goal is the most efficient use of disk space. What do we get after a few short manipulations?


root@:~ # zfs list
NAME            USED  AVAIL  REFER  MOUNTPOINT
appdata        75.5K   961G    23K  /appdata
miscdata       75.5K  3.21T    23K  /miscdata
zsys           2.02G  75.0G    88K  none
zsys/root      2.02G  75.0G  1.18G  /
zsys/root/tmp    88K  75.0G    88K  /tmp
zsys/root/var   862M  75.0G   862M  /var

The appdata pool (a mirror of the /dev/ada0p4.eli and /dev/ada1p4.eli partitions) is for application data of critical importance; the miscdata pool (a "stripe", i.e. the data spans /dev/ada0p5.eli and /dev/ada1p5.eli) is for everything else. There are also the system pool and swap, likewise mirrored across partitions of both disks.


In total, our applications get 1 TB of mirrored space and 3.2 TB for experiments.


How to set it all up? Not difficult. After the initial installation, our disk looks something like this:


root@:~ # gpart show /dev/ada0
=>       40  5860533088  ada0  GPT  (2.7T)
         40         472     1  freebsd-boot  (236K)
        512     8388608     2  freebsd-swap  (4.0G)
    8389120   167772160     3  freebsd-zfs  (80G)
  176161280  5684371848        - free -  (2.7T)

We add partitions to the disks, first the ones we are going to mirror


root@:~ # gpart add -t freebsd-zfs -s 1000g /dev/ada0


root@:~ # gpart add -t freebsd-zfs -s 1000g /dev/ada1


and then everything else


root@:~ # gpart add -t freebsd-zfs /dev/ada0


root@:~ # gpart add -t freebsd-zfs /dev/ada1


we now have the /dev/ada0p4, /dev/ada0p5, /dev/ada1p4 and /dev/ada1p5 partitions. Since we have no physical access to the server in the hoster's DC, it makes sense to encrypt the partitions holding our data.


root@:~ # geli init /dev/ada0p4


root@:~ # geli init /dev/ada0p5


root@:~ # geli init /dev/ada1p4


root@:~ # geli init /dev/ada1p5


enter the encryption passphrase, then attach (decrypt) the partitions:


root@:~ # geli attach /dev/ada0p4


root@:~ # geli attach /dev/ada0p5


root@:~ # geli attach /dev/ada1p4


root@:~ # geli attach /dev/ada1p5


now we have the /dev/ada0p4.eli, /dev/ada0p5.eli, /dev/ada1p4.eli and /dev/ada1p5.eli partitions, and we can build pools from them


You can script this:


root@:~ # cat /root/attach_disks.sh
#!/bin/sh
geli attach /dev/ada0p4
geli attach /dev/ada0p5
geli attach /dev/ada1p4
geli attach /dev/ada1p5

root@:~ # zpool create appdata mirror /dev/ada0p4.eli /dev/ada1p4.eli


and for data without mirroring


root@:~ # zpool create miscdata /dev/ada0p5.eli /dev/ada1p5.eli


Well, our server is ready for battle. We have pools that we can expand later by adding extra drives. And there are snapshots; how to use them I will describe a bit further down.
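For example - purely illustrative, the partition names ada2p1.eli and ada3p1.eli stand for whatever encrypted partitions future disks would get - growing the mirrored pool with another mirrored pair is a one-liner:

root@:~ # zpool add appdata mirror /dev/ada2p1.eli /dev/ada3p1.eli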


Server with three drives


Here the config is slightly different. We have a box with three 450 GB SSDs. This server will be used mainly for full virtualization, so we want as much space as possible while still being able to keep working after losing one of the disks. After the initial installation the disks look like this:


root@:~ # gpart show /dev/ada[0-9]


=>       40  937703008  ada0  GPT  (447G)
         40        472     1  freebsd-boot  (236K)
        512   52428800     2  freebsd-zfs  (25G)
   52429312  885273736        - free -  (422G)

=>       40  937703008  ada1  GPT  (447G)
         40        472     1  freebsd-boot  (236K)
        512   52428800     2  freebsd-zfs  (25G)
   52429312  885273736        - free -  (422G)

=>       40  937703008  ada2  GPT  (447G)
         40        472     1  freebsd-boot  (236K)
        512   52428800     2  freebsd-zfs  (25G)
   52429312  885273736        - free -  (422G)

We create 420 GB partitions (we do not go all the way to the end of the disk, because a replacement disk's actual size may differ slightly)


root@hyper:~ # gpart add -t freebsd-zfs -s 420g /dev/ada0
ada0p3 added
root@hyper:~ # gpart add -t freebsd-zfs -s 420g /dev/ada1
ada1p3 added
root@hyper:~ # gpart add -t freebsd-zfs -s 420g /dev/ada2
ada2p3 added

We encrypt


root@hyper:~ # geli init /dev/ada0p3
Enter new passphrase:
Reenter new passphrase:
Metadata backup for provider /dev/ada0p3 can be found in /var/backups/ada0p3.eli
and can be restored with the following command:
        # geli restore /var/backups/ada0p3.eli /dev/ada0p3

root@hyper:~ # geli init /dev/ada1p3
Enter new passphrase:
Reenter new passphrase:
Metadata backup for provider /dev/ada1p3 can be found in /var/backups/ada1p3.eli
and can be restored with the following command:
        # geli restore /var/backups/ada1p3.eli /dev/ada1p3

root@hyper:~ # geli init /dev/ada2p3
Enter new passphrase:
Reenter new passphrase:
Metadata backup for provider /dev/ada2p3 can be found in /var/backups/ada2p3.eli
and can be restored with the following command:
        # geli restore /var/backups/ada2p3.eli /dev/ada2p3

What is Metadata backup?


This is a file that can be used to restore the GELI metadata, which effectively resets the passphrase back to the one we entered at geli init time - useful if the passphrase is later changed and then forgotten. In any case, this nuance is worth keeping in mind.
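Since that backup lives on the unencrypted system pool, it is worth deciding consciously what to do with it. If you want your own copy to stash somewhere off the server, geli can write one explicitly (the destination path here is just an example):

root@hyper:~ # geli backup /dev/ada0p3 /root/ada0p3.eli.backup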


Attach disks and create a pool


root@hyper:~ # geli attach /dev/ada0p3
Enter passphrase:
root@hyper:~ # geli attach /dev/ada1p3
Enter passphrase:
root@hyper:~ # geli attach /dev/ada2p3
Enter passphrase:
root@hyper:~ # zpool create safestore raidz1 /dev/ada0p3.eli /dev/ada1p3.eli /dev/ada2p3.eli

and at the output we get a pool with about 810 GB of usable space (raidz1 keeps one partition's worth of parity, so three 420 GB partitions give roughly two partitions' worth of capacity)


root@hyper:~ # zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
safestore   89.2K   810G  29.3K  /safestore
...

ZFS pools can be assembled from different disks in all sorts of configurations. It is a kind of storage construction kit that can be extended practically forever: ZFS creates an abstraction layer above the devices that allows some quite amazing configurations.


Installation and configuration of a hypervisor


FreeBSD offers two guest-OS solutions out of the box - jails and bhyve. And there is cbsd, a wrapper around FreeBSD jails, bhyve and Xen. I will not touch the latter in this guide, because I have never used it myself.


Jails are lightweight clones of the FreeBSD system - a very convenient tool for isolating processes. The host system can run jails of the same or any older FreeBSD version. Jails can be migrated between hosts, snapshotted and rolled back, even on the fly. It is worth using jails whenever possible: that way system resources are used optimally instead of being spent on full virtualization.




bhyve, in turn, is a full-fledged hypervisor on which you can run all sorts of Ubuntus and Dockers.




root@:~ # pkg install cbsd


It is worth noting that at the time of writing cbsd pulls in only 11 dependencies, around 35 MB in total.


New packages to be INSTALLED:
        cbsd: 12.1.2
        sudo: 1.8.28
        gettext-runtime: 0.20.1
        indexinfo: 0.3.1
        libssh2: 1.8.2,3
        ca_root_nss: 3.47.1
        rsync: 3.1.3_1
        libiconv: 1.14_11
        pkgconf: 1.6.3,1
        libedit: 3.1.20190324,1
        sqlite3: 3.29.0
        readline: 8.0.0

Number of packages to be installed: 12

The process will require 33 MiB more space.
8 MiB to be downloaded.

Proceed with this action? [y/N]: y

create a separate dataset on the pool for the jails


root@:~ # zfs create miscdata/jails


initialize cbsd


root@:~ # env workdir="/miscdata/jails" /usr/local/cbsd/sudoexec/initenv


Lots of output (the initenv dialog)
-------[CBSD v.12.1.2]-------
This is install/upgrade scripts for CBSD.
Don't forget to backup.
-----------------------------
Do you want prepare or upgrade hier environment for CBSD now?
[yes(1) or no(0)]
1
>>> Installing or upgrading
[Stage 1: account & dir hier]
* Check hier and permission...
./.rssh missing (created)
./.ssh missing (created)
./.ssh/sockets missing (created)
./basejail missing (created)
./etc missing (created)
./etc/defaults missing (created)
./export missing (created)
./ftmp missing (created)
./import missing (created)
./jails missing (created)
./jails-data missing (created)
./jails-fstab missing (created)
./jails-rcconf missing (created)
./jails-system missing (created)
./share missing (created)
./share/dialog missing (created)
./share/helpers missing (created)
./share/FreeBSD-jail-puppet-skel missing (created)
./share/FreeBSD-jail-skel missing (created)
./share/FreeBSD-jail-vnet-skel missing (created)
./share/emulators missing (created)
./src missing (created)
./tmp missing (created)
./var missing (created)
./var/cron missing (created)
./var/cron/tabs missing (created)
./var/db missing (created)
./var/log missing (created)
./var/mail missing (created)
./var/run missing (created)
./var/spool missing (created)
* write directory id: jaildatadir
* write directory id: jailsysdir
* write directory id: jailrcconfdir
* write directory id: dbdir
[Stage 2: build tools]
Shall i add cbsd user into /usr/local/etc/sudoers.d/cbsd_sudoers sudo file to obtain root privileges for the most cbsd commands?
[yes(1) or no(0)]
1
[Stage 3: local settings]
Shall i modify the /etc/rc.conf to sets cbsd_workdir="/miscdata/jails"?:
[yes(1) or no(0)]
1
/etc/rc.conf: cbsd_workdir:  -> /miscdata/jails
[Stage 4: update default skel resolv.conf]
[Stage 5: refreshing inventory]
nodename: CBSD Nodename for this host eg the hostname.
Warning: this operation will recreate the ssh keys in /miscdata/jails/.ssh dir:
hostname.org
Empty inventory database created: /miscdata/jails/var/db/inv.hostname.org.sqlite
nodeip: Node management IPv4 or IPv6 address (used for node interconnection), eg:
151.106.27.106
jnameserver: Jails default DNS name-server (for jails resolv.conf), eg: 9.9.9.9,149.112.112.112
8.8.8.8,8.8.4.4
nodeippool: Jail pool IP address range (networks for jails) Hint: use space as delimiter for multiple networks, eg: 10.0.0.0/16
151.106.27.106/24 192.168.0.0/24
nat_enable: Enable NAT for RFC1918 networks? [yes(1) or no(0)]
0
fbsdrepo: Use official FreeBSD repository? When no (0) the repository of CBSD is preferred (useful for stable=1) for fetching base/kernel? [yes(1) or no(0)]
1
zfsfeat: You are running on a ZFS-based system. Enable ZFS feature? [yes(1) or no(0)]
1
parallel: Parallel mode stop/start ? (0 - no parallel or positive value (in seconds) as timeout for next parallel sequence) eg: 5
0
stable: Use STABLE branch instead of RELEASE by default? Attention: only the CBSD repository has a binary base for STABLE branch ? (STABLE_X instead of RELEASE_X_Y branch for base/kernel will be used), eg: 0 (use release)
0
sqlreplica: Enable sqlite3 replication to remote nodes ? (0 - no replica, 1 - try to replicate all local events to remote nodes) eg: 1
0
statsd_bhyve_enable: Configure CBSD statsd services for collect RACCT bhyve statistics? ? (EXPERIMENTAL FEATURE)? eg: 0
0
statsd_jail_enable: Configure CBSD statsd services for collect RACCT jail statistics? ? (EXPERIMENTAL FEATURE)? eg: 0
0
statsd_hoster_enable: Configure CBSD statsd services for collect RACCT hoster statistics? ? (EXPERIMENTAL FEATURE)? eg: 0
0
[Stage 6: authentication keys]
Generating public/private rsa key pair.
Your identification has been saved in /miscdata/jails/.ssh/8a3574aa0ec0ad3056e7dcf0f48adb01.id_rsa.
Your public key has been saved in /miscdata/jails/.ssh/8a3574aa0ec0ad3056e7dcf0f48adb01.id_rsa.pub.
The key fingerprint is:
SHA256:bZM/lo6lx40vE48MxZea1KQMKYIBq3HyPWQrF0xn980 root@hostname.org
The key's randomart image is:
+---[RSA 2048]----+
|       ..ooo . . |
|    +.o....oo .  |
|oo = . ..+E+ .   |
| * + o . .* +    |
|. o = S =o +     |
| o . ..o+.       |
|      +**        |
|      *Oo        |
|     o..+.       |
+----[SHA256]-----+
[Stage 7: modules]
Installing module pkg.d cmd: pkg
Installing module bsdconf.d cmd: tzsetup
Installing module bsdconf.d cmd: ssh
Installing module bsdconf.d cmd: ftp
Installing module bsdconf.d cmd: adduser
Installing module bsdconf.d cmd: passwd
Installing module bsdconf.d cmd: service
Installing module bsdconf.d cmd: sysrc
Installing module bsdconf.d cmd: userlist
Installing module bsdconf.d cmd: grouplist
Installing module bsdconf.d cmd: adduser-tui
Installing module bsdconf.d cmd: pw
Installing module bsdconf.d cmd: cloudinit
Installing module zfsinstall.d cmd: zfsinstall
[Stage 9: cleanup]
* Remove obsolete files...
Configure RSYNC services for jail migration? [yes(1) or no(0)]
0
cbsdrsyncd_enable:  -> YES
Do you want to enable RACCT feature for resource accounting? [yes(1) or no(0)]
0
Shall i modify the /etc/rc.conf to sets cbsdd_enable=YES ? [yes(1) or no(0)]
0
cbsdd_enable:  -> NO
Shall i modify the /etc/rc.conf to sets rcshutdown_timeout="900"? [yes(1) or no(0)]
0
rcshutdown_timeout: 90 -> 900
[Stage X: upgrading]
* Insert default topology into vm_cpu_topology table
* Insert small1 group into vmpackage table
>>> Done
First CBSD initialization complete. Now your can run: service cbsdd start to run CBSD services.
For change initenv settings in next time, use: cbsd initenv-tui
Also don't forget to execute: cbsd initenv every time when you upgrade CBSD version.
preseedinit: Would you like a config for "cbsd init" preseed to be printed? [yes(1) or no(0)]
0

Run the cbsd daemon. We use onestart because we did not add cbsdd to autostart at system boot: the partitions holding the guest OS data are encrypted, so the daemon cannot come up before they are attached.


root@:~ # service cbsdd onestart


When the server boots, you need to connect to it over SSH (this is exactly why the system partition is left unencrypted - so the system comes up on its own without any password entry over VNC / IPMI) and decrypt the data partitions with the /root/attach_disks.sh script.
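Roughly, the post-reboot sequence looks like this - a sketch rather than a verbatim transcript; depending on how the pools were left at shutdown, the zpool import steps may not be needed:

root@:~ # sh /root/attach_disks.sh     # enter the GELI passphrases
root@:~ # zpool import appdata         # only if the pools were not picked up automatically
root@:~ # zpool import miscdata
root@:~ # service cbsdd onestart
root@:~ # cbsd jstart webapp           # start whatever jails you need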


When initializing cbsd we specified 192.168.0.0/24 as the subnet for our jails. I plan to use the same subnet for the full VMs as well. Let's spin up a couple of jails on the host, but before that we configure the firewall so that NAT for the jails works through the host.


root@:~ # sysrc pf_enable=YES
pf_enable: NO -> YES
root@:~ # service pf start
/etc/rc.d/pf: WARNING: /etc/pf.conf is not readable.

create a rule file for the firewall


root@:~ # ee /etc/pf.conf


IF_PUBLIC="igb0"
IP_PUBLIC="XXXX"
JAIL_IP_POOL="192.168.0.0/24"
icmp_types="echoreq"

set limit { states 100000, frags 20000, src-nodes 20000 }
set skip on lo0
scrub in all

# NAT for others
nat pass on $IF_PUBLIC from $JAIL_IP_POOL to any -> $IP_PUBLIC

## Jail HTTP/S port forward
IP_JAIL="192.168.0.2"
PORT_JAIL="{ 80, 443 }"
rdr pass on $IF_PUBLIC proto tcp from any to $IP_PUBLIC port $PORT_JAIL -> $IP_JAIL

load the rules


root@:~ # service pf start
Enabling pf.
root@:~ # pfctl -f /etc/pf.conf
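If you want to double-check what actually got loaded (not strictly necessary, but handy), pfctl can print the active NAT and filter rules:

root@:~ # pfctl -s nat
root@:~ # pfctl -s rules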

Well, now we create the jails


root@:~ # cbsd jconstruct-tui



Choose to fetch the base for the jails from the repository



and a couple more jails via root@:~ # cbsd jconstruct-tui


then we start the jails


root@:~ # cbsd jstart webapp


and inside it we do whatever we want


root@:~ # cbsd jlogin webapp


for example install nginx
webapp:/root@[12:58] # pkg install nginx
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
Updating database digests format: 100%
The following 2 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        nginx: 1.16.1_4,2
        pcre: 8.43_2

Number of packages to be installed: 2

The process will require 8 MiB more space.
2 MiB to be downloaded.

Proceed with this action? [y/N]: y
[webapp.host.org] [1/2] Fetching nginx-1.16.1_4,2.txz: 100%  442 KiB 452.8kB/s    00:01
[webapp.host.org] [2/2] Fetching pcre-8.43_2.txz: 100%    1 MiB 638.0kB/s    00:02
Checking integrity... done (0 conflicting)
[webapp.host.org] [1/2] Installing pcre-8.43_2...
[webapp.host.org] [1/2] Extracting pcre-8.43_2: 100%
[webapp.host.org] [2/2] Installing nginx-1.16.1_4,2...
===> Creating groups.
Using existing group 'www'.
===> Creating users
Using existing user 'www'.
[webapp.host.org] [2/2] Extracting nginx-1.16.1_4,2: 100%
=====
Message from nginx-1.16.1_4,2:
--
Recent version of the NGINX introduces dynamic modules support. In FreeBSD ports tree this feature was enabled by default with the DSO knob. Several vendor's and third-party modules have been converted to dynamic modules. Unset the DSO knob builds an NGINX without dynamic modules support.

To load a module at runtime, include the new `load_module' directive in the main context, specifying the path to the shared object file for the module, enclosed in quotation marks. When you reload the configuration or restart NGINX, the module is loaded in. It is possible to specify a path relative to the source directory, or a full path, please see https://www.nginx.com/blog/dynamic-modules-nginx-1-9-11/ and http://nginx.org/en/docs/ngx_core_module.html#load_module for details.

Default path for the NGINX dynamic modules is /usr/local/libexec/nginx.

webapp:/root@[12:59] # sysrc nginx_enable=YES
nginx_enable:  -> YES
webapp:/root@[12:59] # service nginx start
Performing sanity check on nginx configuration:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: configuration file /usr/local/etc/nginx/nginx.conf test is successful
Starting nginx.
webapp:/root@[DING!] # sockstat -l4
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
www      nginx      27982 6  tcp4   192.168.0.2:80        *:*
root     nginx      27981 6  tcp4   192.168.0.2:80        *:*
root     sendmail   25361 4  tcp4   192.168.0.2:25        *:*
webapp:/root@[13:00] #

This is what a typical jail zoo looks like:


root@:~ # jls
   JID  IP Address      Hostname                      Path
     1  192.168.0.1     tor.host.org                  /miscdata/jails/jails/tor
     2  192.168.0.2     webapp.host.org               /miscdata/jails/jails/webapp
     3  192.168.0.3     bitcoind.host.org             /miscdata/jails/jails/bitcoind
     4  192.168.0.4     ethd.host.org                 /miscdata/jails/jails/ethd

In the webapp jail nginx is listening, and it is reachable from the outside thanks to the rdr rule in /etc/pf.conf.



Now for full virtualization with bhyve. I run it on the second server, the one with three disks; first the host gets an alias address in the guest subnet and IP forwarding is enabled.


/etc/rc.conf


ifconfig_igb0_alias="inet 192.168.0.1 netmask 255.255.255.0"
gateway_enable="YES"

root@hyper:~ # echo 'vmm_load="YES"' >> /boot/loader.conf
root@hyper:~ # echo 'kld_list="vmm if_tap if_bridge nmdm"' >> /etc/rc.conf
root@hyper:~ # reboot

After the reboot we configure the firewall, just as on the first server.


/etc/pf.conf

IF_PUBLIC="igb0"
IP_PUBLIC="YYYY"


JAIL_IP_POOL="192.168.0.0/24"


icmp_types="echoreq"


set limit { states 100000, frags 20000, src-nodes 20000 }
set skip on lo0
scrub in all


# NAT for others


nat pass on $IF_PUBLIC from $JAIL_IP_POOL to any -> $IP_PUBLIC


## Jail HTTP/S port forward


IP_JAIL="192.168.0.2"
PORT_JAIL="{ 80, 443 }"
rdr pass on $IF_PUBLIC proto tcp from any to $IP_PUBLIC port $PORT_JAIL -> $IP_JAIL


root@hyper:~ # sysrc pf_enable=YES
pf_enable: NO -> YES
root@hyper:~ # service pf start
Enabling pf.

Now let's construct a virtual machine


root@hyper:~ # cbsd bconstruct-tui



root@hyper:~ # cbsd bstart debian
init_systap: waiting for link: igb0
Looks like /safestore/jails/vm/debian/dsk1.vhd is empty.
May be you want to boot from CD?
[yes(1) or no(0)]
1
Temporary boot device: cd
vm_iso_path: iso-debian-10.1.0-amd64-DVD-1.iso
No such media: /safestore/jails/src/iso/cbsd-iso-debian-10.1.0-amd64-DVD-1.iso in /safestore/jails/src/iso
Shall i download it from: https://ftp.acc.umu.se/debian-cd/current/amd64/iso-dvd/ http://debian-cd.repulsive.eu/10.1.0/amd64/iso-dvd/ https://gensho.ftp.acc.umu.se/debian-cd/current/amd64/iso-dvd/ http://cdimage.debian.org/cdimage/release/10.1.0/amd64/iso-dvd/ http://debian.mirror.cambrium.nl/debian-cd/10.1.0/amd64/iso-dvd/ http://mirror.overthewire.com.au/debian-cd/10.1.0/amd64/iso-dvd/ http://ftp.crifo.org/debian-cd/10.1.0/amd64/iso-dvd/ http://debian.cse.msu.edu/debian-cd/10.1.0/amd64/iso-dvd/ ?
[yes(1) or no(0)]
1
Download to: /safestore/jails/src/iso/cbsd-iso-debian-10.1.0-amd64-DVD-1.iso
Scanning for fastest mirror…
Mirror source: Bytes per 3sec:



The VM console is available over VNC on port 5900; I do not expose it to the world but forward it over SSH


ssh hyperhost -L 5900:localhost:5900



We connect a VNC client to localhost:5900 and install the guest OS as usual. I gave the VM the IP 192.168.0.100.


Then add an SSH port forward for the VM to /etc/pf.conf


## VM SSH port forward
IP_VM="192.168.0.100"
PORT_VM="{ 22100 }"
rdr pass on $IF_PUBLIC proto tcp from any to $IP_PUBLIC port $PORT_VM -> $IP_VM port 22

Reload the rules with pfctl -f /etc/pf.conf. The VM's disk lives on a ZFS dataset, so we can snapshot it and roll back at any time.


root@hyper:~ # zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
safestore                        5.89G   804G  29.3K  /safestore
safestore/jails                  5.89G   804G  4.46G  /safestore/jails
safestore/jails/debian            920M   804G   208K  /safestore/jails/vm/debian
safestore/jails/debian/dsk1.vhd   919M   804G   919M  -
safestore/jails/linuxjail         329M   804G   329M  /safestore/jails/jails-data/linuxjail-data
safestore/jails/nginx            89.2M   804G  89.2M  /safestore/jails/jails-data/nginx-data
safestore/jails/tor               117M   804G   117M  /safestore/jails/jails-data/tor-data
zsystem                          1.85G  21.9G    88K  none
zsystem/root                     1.85G  21.9G  1.20G  /
zsystem/root/tmp                  120K  21.9G   120K  /tmp
zsystem/root/var                  662M  21.9G   662M  /var

root@hyper:~ # zfs snap safestore/jails/debian/dsk1.vhd@fresh


Now we can always return the VM to this clean state.
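Manually, that rollback is just the few commands that the Ansible playbook below wraps:

root@hyper:~ # cbsd bstop debian
root@hyper:~ # zfs rollback safestore/jails/debian/dsk1.vhd@fresh
root@hyper:~ # cbsd bstart debian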


Automation with Ansible


Ansible needs Python 3 on the managed hosts; the Debian VM already has it, and for the FreeBSD host we point Ansible at the local interpreter in the inventory (see ansible_python_interpreter below).


/etc/ansible/hosts
[hyper-debian-group]
hyper-debian

[hyper-group]
hyper

[hyper-group:vars]
ansible_python_interpreter=/usr/local/bin/python3.7

~/.ssh/config
Host hyper
    Hostname YYYY
    User root
    Port 33696

Host hyper-debian
    Hostname YYYY
    User root
    Port 22100

~/deploy-docker.yml
- hosts: hyper-debian
  gather_facts: no
  tasks:
    - name: Install a list of misc packages
      apt:
        update_cache: yes
        pkg:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg2
          - software-properties-common
    - name: Add Docker repo
      shell: curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    - name: Add Docker repo
      shell: apt-key fingerprint 0EBFCD88
    - name: Do some linux repo magic
      shell: add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    - name: Install Docker CE suite & composer
      apt:
        update_cache: yes
        pkg:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-compose
    - name: Update all packages to the latest version
      apt:
        upgrade: dist

~/rollback-debian.yml
- hosts: hyper
  gather_facts: no
  tasks:
    - name: stop debian
      shell: cbsd bstop debian
    - name: list ZFS datasets
      shell: zfs list | grep debian
      register: zfslist
    - debug: var=zfslist.stdout_lines
    - name: rollback debian
      shell: zfs rollback safestore/jails/debian/dsk1.vhd@fresh
    - name: list ZFS datasets
      shell: zfs list | grep debian
      register: zfslist
    - debug: var=zfslist.stdout_lines
    - name: start debian
      shell: cbsd bstart debian

You end up with a sort of home-grown Terraform.



[user@localhost ~]$ ansible-playbook deploy-docker.yml

PLAY [hyper-debian] *******************************************************************************************************

TASK [Install a list of misc packages] *********************************************************************************************
changed: [hyper-debian]

TASK [Add Docker repo] ********************************************************************************************************
changed: [hyper-debian]

TASK [Add Docker repo] ********************************************************************************************************
changed: [hyper-debian]

TASK [Do some linux repo magic] ********************************************************************************************************
changed: [hyper-debian]

TASK [Install Docker CE suite & composer] ********************************************************************************************************
changed: [hyper-debian]

TASK [Update all packages to the latest version] ********************************************************************************************************
ok: [hyper-debian]

PLAY RECAP ********************************************************************************************************
hyper-debian               : ok=6    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Now inside the VM we can run whatever we like in Docker - for example, iRedMail.


docker run --privileged -p 80:80 -p 443:443 \
    -e "DOMAIN=example.com" -e "HOSTNAME=mail" \
    -e "MYSQL_ROOT_PASSWORD=password" \
    -e "SOGO_WORKERS=1" \
    -e "TIMEZONE=Europe/Prague" \
    -e "POSTMASTER_PASSWORD=password" \
    -e "IREDAPD_PLUGINS=['reject_null_sender', 'reject_sender_login_mismatch', 'greylisting', 'throttle', 'amavisd_wblist', 'sql_alias_access_policy']" \
    -v PATH/mysql:/var/lib/mysql \
    -v PATH/vmail:/var/vmail \
    -v PATH/clamav:/var/lib/clamav \
    --name=iredmail lejmr/iredmail:mysql-latest

Played around enough? Roll Debian back to the clean snapshot.


[user@localhost ~]$ ansible-playbook rollback-debian.yml

PLAY [hyper] ********************************************************************************************************

TASK [stop debian] ********************************************************************************************************
changed: [hyper]

TASK [list ZFS datasets] ********************************************************************************************************
changed: [hyper]

TASK [debug] ********************************************************************************************************
ok: [hyper] => {
    "zfslist.stdout_lines": [
        "safestore/jails/debian            2.65G   803G   206K  /safestore/jails/vm/debian",
        "safestore/jails/debian/dsk1.vhd   2.65G   803G  2.34G  -"
    ]
}

TASK [rollback debian] ********************************************************************************************************
changed: [hyper]

TASK [list ZFS datasets] ********************************************************************************************************
changed: [hyper]

TASK [debug] ********************************************************************************************************
ok: [hyper] => {
    "zfslist.stdout_lines": [
        "safestore/jails/debian             920M   804G   206K  /safestore/jails/vm/debian",
        "safestore/jails/debian/dsk1.vhd    919M   804G   919M  -"
    ]
}

TASK [start debian] ********************************************************************************************************
changed: [hyper]

PLAY RECAP ********************************************************************************************************
hyper                      : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

A minute later the VM is back in its freshly installed state.


If this was useful and you want to say thanks: 1baysxTdXkwZnBosDdL1veb2zWDo6DC5b


Good luck!

Source: https://habr.com/ru/post/479192/

