netcat – Jupiter Broadcasting (https://www.jupiterbroadcasting.com)

Detain the bhyve | BSD Now 272
https://original.jupiterbroadcasting.net/128086/detain-the-bhyve-bsd-now-272/ (Thu, 15 Nov 2018)


##Headlines
###The byproducts of reading OpenBSD netcat code

When I took part in a training course last year, I heard about netcat for the first time. During that class, the tutor showed some hacks and tricks for using netcat which appealed to me and motivated me to learn its guts. Fortunately, in the past 2 months I was not so busy, so I could spend my spare time diving into OpenBSD’s netcat source code, and I picked up abundant byproducts along the way.
(1) Brushing up on socket programming. I wrote my first network application more than 10 years ago, and I have always thought the socket APIs are marvelous. Just ~10 functions (socket, bind, listen, accept…) plus some I/O multiplexing buddies (select, poll, epoll…) connect the whole world. Wonderful! Since then I have made it a habit that whenever I touch a new programming language, network programming is an essential exercise. Even though I don’t write socket-related code these days, reading netcat’s socket code really refreshed my knowledge and taught me new things.
(2) Writing a tutorial about netcat. I am a mediocre programmer and forget things when I don’t use them for a long time, so I just take notes on what I think is useful. IMHO, this “tutorial” isn’t really meant to teach others something; it is just a journal I can refer to when I need it in the future.
(3) Submitting patches to netcat. While reading the code, I also found bugs and some possible enhancements. Though these are trivial contributions to OpenBSD, I am still happy about them and enjoyed the process.
(4) Implementing a C++ encapsulation of libtls. OpenBSD’s netcat supports TLS/SSL connections, but it requires you to take full care of resource management (memory, sockets, etc.); otherwise a small mistake can lead to a resource leak, which is fatal for long-lived applications (in fact, the two bugs I reported to OpenBSD were both related to resource leaks). Therefore I developed a simple C++ library which wraps libtls, in the hope that it can free developers from this troublesome problem and let them put more energy into the application logic. A sketch of the underlying plain-C resource handling follows below.
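
To illustrate the problem the wrapper is meant to solve, here is a rough, hedged sketch of bare libtls usage in plain C (the host, port, and request are placeholders, and the TLS_WANT_POLLIN/POLLOUT retry cases are ignored for brevity). Every early exit has to funnel through the same cleanup path, which is exactly what a RAII-style C++ wrapper automates.

/* Illustrative only: manual libtls (LibreSSL) resource management in C. */
#include <string.h>
#include <tls.h>

int main(void)
{
    struct tls_config *cfg = NULL;
    struct tls *ctx = NULL;
    int rc = 1;

    if (tls_init() == -1)        /* a no-op on newer libtls, needed on older */
        return 1;
    if ((cfg = tls_config_new()) == NULL)
        goto out;
    if ((ctx = tls_client()) == NULL)
        goto out;
    if (tls_configure(ctx, cfg) == -1)
        goto out;
    if (tls_connect(ctx, "www.example.org", "443") == -1)
        goto out;

    const char *req = "HEAD / HTTP/1.0\r\nHost: www.example.org\r\n\r\n";
    if (tls_write(ctx, req, strlen(req)) == -1)
        goto out;

    tls_close(ctx);              /* shut the TLS session down cleanly */
    rc = 0;
out:
    /* Every early exit must pass through here, or something leaks. */
    if (ctx != NULL)
        tls_free(ctx);
    if (cfg != NULL)
        tls_config_free(cfg);
    return rc;
}
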
Long story short, reading classic source code is a rewarding process, and you might consider trying it yourself.


###What I learned from porting my projects to FreeBSD

  • Introduction

I set up a local FreeBSD VirtualBox VM to test something, and it seemed to work very well. Due to the novelty factor, I decided to get my software projects to build and pass their tests there.


##News Roundup
###OpenBSD’s unveil()

One of the key aspects of hardening the user-space side of an operating system is to provide mechanisms for restricting which parts of the filesystem hierarchy a given process can access. Linux has a number of mechanisms of varying capability and complexity for this purpose, but other kernels have taken a different approach. Over the last few months, OpenBSD has inaugurated a new system call named unveil() for this type of hardening that differs significantly from the mechanisms found in Linux.
The value of restricting access to the filesystem, from a security point of view, is fairly obvious. A compromised process cannot exfiltrate data that it cannot read, and it cannot corrupt files that it cannot write. Preventing unwanted access is, of course, the purpose of the permissions bits attached to every file, but permissions fall short in an important way: just because a particular user has access to a given file does not necessarily imply that every program run by that user should also have access to that file. There is no reason why your PDF viewer should be able to read your SSH keys, for example. Relying on just the permission bits makes it easy for a compromised process to access files that have nothing to do with that process’s actual job.
In a Linux system, there are many ways of trying to restrict that access; that is one of the purposes behind the Linux security module (LSM) architecture, for example. The SELinux LSM uses a complex matrix of labels and roles to make access-control decisions. The AppArmor LSM, instead, uses a relatively simple table of permissible pathnames associated with each application; that approach was highly controversial when AppArmor was first merged, and is still looked down upon by some security developers. Mount namespaces can be used to create a special view of the filesystem hierarchy for a set of processes, rendering much of that hierarchy invisible and, thus, inaccessible. The seccomp mechanism can be used to make decisions on attempts by a process to access files, but that approach is complex and error-prone. Yet another approach can be seen in the Qubes OS distribution, which runs applications in virtual machines to strictly control what they can access.
Compared to many of the options found in Linux, unveil() is an exercise in simplicity. This system call, introduced in July, has this prototype:

int unveil(const char *path, const char *permissions);

A process that has never called unveil() has full access to the filesystem hierarchy, modulo the usual file permissions and any restrictions that may have been applied by calling pledge(). Calling unveil() for the first time will “drop a veil” across the entire filesystem, rendering the whole thing invisible to the process, with one exception: the file or directory hierarchy starting at path will be accessible with the given permissions. The permissions string can contain any of “r” for read access, “w” for write, “x” for execute, and “c” for the ability to create or remove the path.
Subsequent calls to unveil() will make other parts of the filesystem hierarchy accessible; the unveil() system call itself still has access to the entire hierarchy, so there is no problem with unveiling distinct subtrees that are, until the call is made, invisible to the process. If one unveil() call applies to a subtree of a hierarchy unveiled by another call, the permissions associated with the more specific call apply.
Calling unveil() with both arguments as null will block any further calls, setting the current view of the filesystem in stone. Calls to unveil() can also be blocked using pledge(). Either way, once the view of the filesystem has been set up appropriately, it is possible to lock it so that the process cannot expand its access in the future should it be taken over and turn hostile.
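
As a concrete sketch of the calls described above (the paths and permissions are invented for the example), a program that only needs a read-only document tree and a writable spool directory might do:

/* Minimal illustrative use of unveil() on OpenBSD; paths are placeholders. */
#include <err.h>
#include <unistd.h>

int main(void)
{
    /* Expose a read-only document tree and a writable spool directory. */
    if (unveil("/var/www/htdocs", "r") == -1)
        err(1, "unveil /var/www/htdocs");
    if (unveil("/var/spool/app", "rwc") == -1)
        err(1, "unveil /var/spool/app");

    /* Lock the view in place: no further unveil() calls are allowed. */
    if (unveil(NULL, NULL) == -1)
        err(1, "unveil lock");

    /* From here on, the process sees only the two unveiled subtrees. */
    return 0;
}
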
unveil() thus looks a bit like AppArmor, in that it is a path-based mechanism for restricting access to files. In either case, one must first study the program in question to gain a solid understanding of which files it needs to access before closing things down, or the program is likely to break. One significant difference (beyond the other sorts of behavior that AppArmor can control) is that AppArmor’s permissions are stored in an external policy file, while unveil() calls are made by the application itself. That approach keeps the access rules tightly tied to the application and easy for the developers to modify, but it also means that system administrators cannot change the rules without rebuilding the application from source.
One can certainly aim a number of criticisms at unveil() — all of the complaints that have been leveled at path-based access control and more. But the simplicity of unveil() brings a certain kind of utility, as can be seen in the large number of OpenBSD applications that are being modified to use it. OpenBSD is gaining a base level of protection against unintended program behavior; while it is arguably possible to protect a Linux system to a much greater extent, the complexity of the mechanisms involved keeps that from happening in a lot of real-world deployments. There is a certain kind of virtue to simplicity in security mechanisms.


###NetBSD Virtual Machine Monitor (NVMM)

  • NetBSD Virtual Machine Monitor

The NVMM driver provides hardware-accelerated virtualization support on NetBSD. It is made of a mostly machine-independent (MI) frontend, to which machine-dependent (MD) backends can be plugged in. A virtualization API is provided in libnvmm, which allows you to easily create and manage virtual machines via NVMM. Two additional components are shipped as demonstrators, toyvirt and smallkern: the former is a toy virtualizer that executes the 64-bit ELF binary given as an argument inside a VM; the latter is an example of such a binary.

  • Download

The source code of NVMM, plus the associated tools, can be downloaded here.

  • Technical details

NVMM can support up to 128 virtual machines, each having a maximum of 256 VCPUs and 4GB of RAM.
Each virtual machine is granted access to most of the CPU registers: the GPRs (obviously), the Segment Registers, the Control Registers, the Debug Registers, the FPU (x87 and SSE), and several MSRs.
Events can be injected into the virtual machines to emulate device interrupts. A delay mechanism is used, which allows VMM software to schedule the interrupt right when the VCPU can receive it. NMIs can be injected as well, and use a similar mechanism.
The host must always be x86_64, but the guest has no constraint on the mode, so it can be x86_32, PAE, real mode, and so on.
The TSC of each VCPU is always re-based on the host CPU it is executing on, and is therefore guaranteed to increase regardless of the host CPU. However, it may not increase monotonically, because it is not possible to fully hide the host effects on the guest during #VMEXITs.
When there are more VCPUs than the host TLB can deal with, NVMM uses a shared ASID, and flushes the shared-ASID VCPUs on each VM switch.
The different intercepts are configured in such a way that they cover everything that needs to be emulated. In particular, the LAPIC can be emulated by VMM software, by intercepting reads/writes to the LAPIC page in memory, and monitoring changes to CR8 in the exit state.


###What ‘dependency’ means in Unix init systems is underspecified (utoronto.ca)

I was reading Davin McCall’s On the vagaries of init systems (via) when I ran across the following, about the relationship between various daemons (services, etc):
I do not see any compelling reason for having ordering relationships without actual dependency, as both Nosh and Systemd provide for. In comparison, Dinit’s dependencies also imply an ordering, which obviates the need to list a dependency twice in the service description.
Well, this may be an easy one but it depends on what an init system means by ‘dependency’. Let’s consider syslog and the SSH daemon. I want the syslog daemon to be started before the SSH daemon is started, so that the SSH daemon can log things to it from the beginning. However, I very much do not want the SSH daemon to not be started (or to be shut down) if the syslog daemon fails to start or later fails. If syslog fails, I still want the SSH daemon to be there so that I can perhaps SSH in to the machine and fix the problem.
This is generally true of almost all daemons; I want them to start after syslog, so that they can syslog things, but I almost never want them to not be running if syslog failed. (And if for some reason syslog is not configured to start, I want enabling and starting, say, SSH, to also enable and start the syslog daemon.)
In general, there are three different relationships between services that I tend to encounter:

  • a hard requirement, where service B is useless or dangerous without service A. For instance, many NFS v2 and NFS v3 daemons basically don’t function without the RPC portmapper alive and active. On any number of systems, firewall rules being in place are a hard requirement to start most network services; you would rather your network services not start at all than that they start without your defenses in place.

  • a want, where service B wants service A to be running before B starts up, and service A should be started even if it wouldn’t otherwise be, but the failure of A still leaves B functional. Many daemons want the syslog daemon to be started before they start but will run without it, and often you want them to do so so that at least some of the system works even if there is, say, a corrupt syslog configuration file that causes the daemon to error out on start. (But some environments want to hard-fail if they can’t collect security related logging information, so they might make rsyslogd a requirement instead of a want.)

  • an ordering, where if service A is going to be started, B wants to start after it (or before it), but B isn’t otherwise calling for A to be started. We have some of these in our systems, where we need NFS mounts done before cron starts and runs people’s @reboot jobs but neither cron nor NFS mounts exactly or explicitly want each other. (The system as a whole wants both, but that’s a different thing.)

Given these different relationships and the implications for what the init system should do in different situations, talking about ‘dependency’ in init systems is kind of underspecified. What sort of dependency? What happens if one service doesn’t start or fails later? (A small toy model contrasting these three relationships appears below.)
My impression is that generally people pick a want relationship as the default meaning for init system ‘dependency’. Usually this is fine; most services aren’t actively dangerous if one of their declared dependencies fails to start, and it’s generally harmless on any particular system to force a want instead of an ordering relationship because you’re going to be starting everything anyway.

  • (In my example, you might as well say that cron on the systems in question wants NFS mounts. There is no difference in practice; we already always want to do NFS mounts and start cron.)
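
To make the distinction concrete, here is a tiny toy model in C. It is purely illustrative, not how systemd, nosh, or Dinit is implemented; it only encodes the two questions the three relationships answer differently: does starting B pull A in, and does A's failure stop B?

/* Toy model of the three service relationships discussed above. */
#include <stdbool.h>
#include <stdio.h>

enum rel { REQUIRES, WANTS, ORDER_ONLY };

/* Should starting B also cause A to be started? */
static bool pulls_in_a(enum rel r) { return r != ORDER_ONLY; }

/* If A failed to start, should the init system still start B? */
static bool start_b_if_a_failed(enum rel r) { return r != REQUIRES; }

int main(void)
{
    const char *names[] = { "hard requirement", "want", "ordering only" };

    for (int r = REQUIRES; r <= ORDER_ONLY; r++)
        printf("%-16s  pulls in A: %-3s  start B if A failed: %s\n",
            names[r],
            pulls_in_a((enum rel)r) ? "yes" : "no",
            start_b_if_a_failed((enum rel)r) ? "yes" : "no");
    return 0;
}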

###Jailing The bhyve Hypervisor

As FreeBSD nears the final 12.0-RELEASE release engineering cycles, I’d like to take a moment to document a cool new feature coming in 12: jailed bhyve.
You may notice that I use HardenedBSD instead of FreeBSD in this article. There is no functional difference in bhyve on HardenedBSD versus bhyve on FreeBSD. The only difference between HardenedBSD and FreeBSD is the additional security offered by HardenedBSD.
The steps I outline here work for both FreeBSD and HardenedBSD. These are the bare minimum steps, no extra work needed for either FreeBSD or HardenedBSD.

  • A Gentle History Lesson

At work in my spare time, I’m helping develop a malware lab. Due to the nature of the beast, we would like to use bhyve on HardenedBSD. Starting with HardenedBSD 12, non-Cross-DSO CFI, SafeStack, Capsicum, ASLR, and strict W^X are all applied to bhyve, making it an extremely hardened hypervisor.
So, the work to support jailed bhyve is sponsored by both HardenedBSD and my employer. We’ve also jointly worked on other bhyve hardening features, like protecting the VM’s address space using guard pages (mmap(…, MAP_GUARD, …)). Further work is being done in a project called “malhyve.” Only those modifications to bhyve/malhyve that make sense to upstream will be upstreamed.

  • Initial Setup

We will not go through the process of creating the jail’s filesystem. That process is documented in the FreeBSD Handbook. For UEFI guests, you will need to install the uefi-edk2-bhyve package inside the jail.
I network these jails with traditional jail networking. I have tested vnet jails with this setup, and that works fine, too. However, there is no real need to hook the jail up to any network so long as bhyve can access the tap device. In some cases, the VM might not need networking, in which case you can use a network-less VM in a network-less jail.
By default, access to the kernel side of bhyve is disabled within jails. We need to set allow.vmm in our jail.conf entry for the bhyve jail.

  • We will use the following in our jail, so we will need to set up devfs(8) rules for them:

  • A ZFS volume

  • A null-modem device (nmdm(4))

  • UEFI GOP (no devfs rule, but IP assigned to the jail)

  • A tap device

  • Conclusion

The bhyve hypervisor works great within a jail. When combined with HardenedBSD, bhyve is extremely hardened:

  • PaX ASLR is fully applied due to compilation as a Position-Independent Executable (HardenedBSD enhancement)
  • PaX NOEXEC is fully applied (strict W^X) (HardenedBSD enhancement)
  • Non-Cross-DSO CFI is fully applied (HardenedBSD enhancement)
  • Full RELRO (RELRO + BIND_NOW) is fully applied (HardenedBSD enhancement)
  • SafeStack is applied to the application (HardenedBSD enhancement)
  • Jailed (FreeBSD feature written by HardenedBSD)
  • Virtual memory protected with guard pages (FreeBSD feature written by HardenedBSD)
  • Capsicum is fully applied (FreeBSD feature)

Bad guys are going to have a hard time breaking out of the userland components of bhyve on HardenedBSD. 🙂


##Beastie Bits


##Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Netcat Demystified | BSD Now 268
https://original.jupiterbroadcasting.net/127641/netcat-demystified-bsd-now-268/ (Wed, 17 Oct 2018)


##Headlines
###Six Metrics for Measuring ZFS Pool Performance Part 1

The layout of a ZFS storage pool has a significant impact on system performance under various workloads. Given the importance of picking the right configuration for your workload and the fact that making changes to an in-use ZFS pool is far from trivial, it is important for an administrator to understand the mechanics of pool performance when designing a storage system.

  • To quantify pool performance, we will consider six primary metrics:
  • Read I/O operations per second (IOPS)
  • Write IOPS
  • Streaming read speed
  • Streaming write speed
  • Storage space efficiency (usable capacity after parity versus total raw capacity)
  • Fault tolerance (maximum number of drives that can fail before data loss)
  • For the sake of comparison, we’ll use an example system with 12 drives, each one sized at 6TB, and say that each drive does 100MB/s streaming reads and writes and can do 250 read and write IOPS. We will visualize how the data is spread across the drives by writing 12 multi-colored blocks, shown below. The blocks are written to the pool starting with the brown block on the left (number one), and working our way to the pink block on the right (number 12).

Note that when we calculate data rates and IOPS values for the example system, they are only approximations. Many other factors can impact pool access speeds for better (compression, caching) or worse (poor CPU performance, not enough memory).
There is no single configuration that maximizes all six metrics. Like so many things in life, our objective is to find an appropriate balance of the metrics to match a target workload. For example, a cold-storage backup system will likely want a pool configuration that emphasizes usable storage space and fault tolerance over the other data-rate focused metrics.
Let’s start with a quick review of ZFS storage pools before diving into specific configuration options. ZFS storage pools are comprised of one or more virtual devices, or vdevs. Each vdev is comprised of one or more storage providers, typically physical hard disks. All disk-level redundancy is configured at the vdev level. That is, the RAID layout is set on each vdev as opposed to on the storage pool. Data written to the storage pool is then striped across all the vdevs. Because pool data is striped across the vdevs, the loss of any one vdev means total pool failure. This is perhaps the single most important fact to keep in mind when designing a ZFS storage system. We will circle back to this point in the next post, but keep it in mind as we go through the vdev configuration options.
Because storage pools are made up of one or more vdevs with the pool data striped over the top, we’ll take a look at pool configuration in terms of various vdev configurations. There are three basic vdev configurations: striping, mirroring, and RAIDZ (which itself has three different varieties). The first section will cover striped and mirrored vdevs in this post; the second post will cover RAIDZ and some example scenarios.
A striped vdev is the simplest configuration. Each vdev consists of a single disk with no redundancy. When several of these single-disk, striped vdevs are combined into a single storage pool, the total usable storage space would be the sum of all the drives. When you write data to a pool made of striped vdevs, the data is broken into small chunks called “blocks” and distributed across all the disks in the pool. The blocks are written in “round-robin” sequence, meaning after all the disks receive one row of blocks, called a stripe, it loops back around and writes another stripe under the first. A striped pool has excellent performance and storage space efficiency, but absolutely zero fault tolerance. If even a single drive in the pool fails, the entire pool will fail and all data stored on that pool will be lost.
The excellent performance of a striped pool comes from the fact that all of the disks can work independently for all read and write operations. If you have a bunch of small read or write operations (IOPS), each disk can work independently to fetch the next block. For streaming reads and writes, each disk can fetch the next block in line synchronized with its neighbors. For example, if a given disk is fetching block n, its neighbor to the left can be fetching block n-1, and its neighbor to the right can be fetching block n+1. Therefore, the speed of all read and write operations as well as the quantity of read and write operations (IOPS) on a striped pool will scale with the number of vdevs. Note here that I said the speeds and IOPS scale with the number of vdevs rather than the number of drives; there’s a reason for this and we’ll cover it in the next post when we discuss RAID-Z.
Here’s a summary of the total pool performance (where N is the number of disks in the pool):

  • N-wide striped:
  • Read IOPS: N * Read IOPS of a single drive
  • Write IOPS: N * Write IOPS of a single drive
  • Streaming read speed: N * Streaming read speed of a single drive
  • Streaming write speed: N * Streaming write speed of a single drive
  • Storage space efficiency: 100%
  • Fault tolerance: None!

Let’s apply this to our example system, configured with a 12-wide striped pool:

  • 12-wide striped:
  • Read IOPS: 3000
  • Write IOPS: 3000
  • Streaming read speed: 1200 MB/s
  • Streaming write speed: 1200 MB/s
  • Storage space efficiency: 100% (72 TB)
  • Fault tolerance: None!
  • Below is a visual depiction of our 12 rainbow blocks written to this pool configuration:

The blocks are simply striped across the 12 disks in the pool. The LBA column on the left stands for “Logical Block Address”. If we treat each disk as a column in an array, each LBA would be a row. It’s also easy to see that if any single disk fails, we would be missing a color in the rainbow and our data would be incomplete. While this configuration has fantastic read and write speeds and can handle a ton of IOPS, the data stored on the pool is very vulnerable. This configuration is not recommended unless you’re comfortable losing all of your pool’s data whenever any single drive fails.
A mirrored vdev consists of two or more disks. A mirrored vdev stores an exact copy of all the data written to it on each one of its drives. Traditional RAID-1 mirrors usually only support two drive mirrors, but ZFS allows for more drives per mirror to increase redundancy and fault tolerance. All disks in a mirrored vdev have to fail for the vdev, and thus the whole pool, to fail. Total storage space will be equal to the size of a single drive in the vdev. If you’re using mismatched drive sizes in your mirrors, the total size will be that of the smallest drive in the mirror.
Streaming read speeds and read IOPS on a mirrored vdev will be faster than write speeds and IOPS. When reading from a mirrored vdev, the drives can “divide and conquer” the operations, similar to what we saw above in the striped pool. This is because each drive in the mirror has an identical copy of the data. For write operations, all of the drives need to write a copy of the data, so the mirrored vdev will be limited to the streaming write speed and IOPS of a single disk.

Here’s a summary:

  • N-way mirror:

  • Read IOPS: N * Read IOPS of a single drive

  • Write IOPS: Write IOPS of a single drive

  • Streaming read speed: N * Streaming read speed of a single drive

  • Streaming write speed: Streaming write speed of a single drive

  • Storage space efficiency: 50% for 2-way, 33% for 3-way, 25% for 4-way, etc. [1/N]

  • Fault tolerance: 1 disk per vdev for 2-way, 2 for 3-way, 3 for 4-way, etc. [N-1]

  • For our first example configuration, let’s do something ridiculous and create a 12-way mirror. ZFS supports this kind of thing, but your management probably will not.

  • 1x 12-way mirror:

  • Read IOPS: 3000

  • Write IOPS: 250

  • Streaming read speed: 1200 MB/s

  • Streaming write speed: 100 MB/s

  • Storage space efficiency: 8.3% (6 TB)

  • Fault tolerance: 11

As we can clearly see from the diagram, every single disk in the vdev gets a full copy of our rainbow data. The chainlink icons between the disk labels in the column headers indicate the disks are part of a single vdev. We can lose up to 11 disks in this vdev and still have a complete rainbow. Of course, the data takes up far too much room on the pool, occupying a full 12 LBAs in the data array.

Obviously, this is far from the best use of 12 drives. Let’s do something a little more practical and configure the pool with the ZFS equivalent of RAID-10. We’ll configure six 2-way mirror vdevs. ZFS will stripe the data across all 6 of the vdevs. We can use the work we did in the striped vdev section to determine how the pool as a whole will behave. Let’s first calculate the performance per vdev, then we can work on the full pool:

  • 1x 2-way mirror:

  • Read IOPS: 500

  • Write IOPS: 250

  • Streaming read speed: 200 MB/s

  • Streaming write speed: 100 MB/s

  • Storage space efficiency: 50% (6 TB)

  • Fault tolerance: 1

  • Now we can pretend we have 6 drives with the performance statistics listed above and run them through our striped vdev performance calculator to get the total pool’s performance:

  • 6x 2-way mirror:

  • Read IOPS: 3000

  • Write IOPS: 1500

  • Streaming read speed: 1200 MB/s

  • Streaming write speed: 600 MB/s

  • Storage space efficiency: 50% (36 TB)

  • Fault tolerance: 1 per vdev, 6 total

  • Again, we will examine the configuration from a visual perspective:

Each vdev gets a block of data and ZFS writes that data to all of (or in this case, both of) the disks in the mirror. As long as we have at least one functional disk in each vdev, we can retrieve our rainbow. As before, the chain link icons denote the disks are part of a single vdev. This configuration emphasizes performance over raw capacity but doesn’t totally disregard fault tolerance as our striped pool did. It’s a very popular configuration for systems that need a lot of fast I/O. Let’s look at one more example configuration using four 3-way mirrors. We’ll skip the individual vdev performance calculation and go straight to the full pool:

  • 4x 3-way mirror:
  • Read IOPS: 3000
  • Write IOPS: 1000
  • Streaming read speed: 1200 MB/s
  • Streaming write speed: 400 MB/s
  • Storage space efficiency: 33% (24 TB)
  • Fault tolerance: 2 per vdev, 8 total

While we have sacrificed some write performance and capacity, the pool is now extremely fault tolerant. This configuration is probably not practical for most applications and it would make more sense to use lower fault tolerance and set up an offsite backup system.
Striped and mirrored vdevs are fantastic for access speed performance, but they either leave you with no redundancy whatsoever or impose at least a 50% penalty on the total usable space of your pool. In the next post, we will cover RAIDZ, which lets you keep data redundancy without sacrificing as much storage space efficiency. We’ll also look at some example workload scenarios and decide which layout would be the best fit for each.
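
To pull the striped and mirrored formulas together, here is a small illustrative calculator (not from the article) that reproduces the numbers above for the example system's drives, treating a plain stripe as a pool of 1-way mirrors:

/* Illustrative ZFS pool-layout math for the example system above:
 * each drive does 250 IOPS, 100 MB/s streaming, and holds 6 TB. */
#include <stdio.h>

#define DRIVE_IOPS        250
#define DRIVE_STREAM_MBS  100
#define DRIVE_SIZE_TB       6

static void pool_metrics(int vdevs, int mirror_way)
{
    printf("%dx %d-way mirror (%d drives):\n",
        vdevs, mirror_way, vdevs * mirror_way);
    printf("  Read IOPS:             %d\n", vdevs * mirror_way * DRIVE_IOPS);
    printf("  Write IOPS:            %d\n", vdevs * DRIVE_IOPS);
    printf("  Streaming read speed:  %d MB/s\n",
        vdevs * mirror_way * DRIVE_STREAM_MBS);
    printf("  Streaming write speed: %d MB/s\n", vdevs * DRIVE_STREAM_MBS);
    printf("  Usable space:          %d TB (%.1f%%)\n",
        vdevs * DRIVE_SIZE_TB, 100.0 / mirror_way);
    printf("  Fault tolerance:       %d per vdev, %d total\n\n",
        mirror_way - 1, vdevs * (mirror_way - 1));
}

int main(void)
{
    pool_metrics(12, 1);   /* 12-wide stripe (a "1-way mirror" per vdev) */
    pool_metrics(1, 12);   /* one 12-way mirror                          */
    pool_metrics(6, 2);    /* ZFS equivalent of RAID-10                  */
    pool_metrics(4, 3);    /* 4x 3-way mirror                            */
    return 0;
}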


###2FA with ssh on OpenBSD

Five years ago I wrote about using a yubikey on OpenBSD. The only problem with doing this is that there’s no validation server available on OpenBSD, so you need to use a different OTP slot for each machine. (You don’t want to risk a replay attack if someone succeeds in capturing an OTP on one machine, right?) Yubikey has two OTP slots per device, so you would need a yubikey for every two machines with which you’d like to use it. You could use a bastion—and use only one yubikey—but I don’t like the SPOF aspect of a bastion. YMMV.
After I played with TOTP, I wanted to use them as a 2FA for ssh. At the time of writing, we can’t do that using only the tools in base. This article focuses on OpenBSD; if you use another operating system, here are two handy links.

  • SEED CONFIGURATION

The first thing we need to do is to install the software which will be used to verify the OTPs we submit.

# pkg_add login_oath

We need to create a secret – aka, the seed – that will be used to calculate the Time-based One-Time Passwords. We should make sure no one can read or change it.

$ openssl rand -hex 20 > ~/.totp-key
$ chmod 400 ~/.totp-key

Now we have a hexadecimal key, but apps usually want a base32 secret. I initially wrote a small script to do the conversion.
While writing this article, I took the opportunity to improve it. When I initially wrote this utility for my use, python-qrcode hadn’t yet been imported to the OpenBSD ports/packages system. It’s easy to install now, so let’s use it.
Here’s the improved version. It will ask for the hex key and output the secret as a base32-encoded string, both with and without spacing so you can copy-paste it into your password manager or easily retype it. It will then ask for the information needed to generate a QR code. Adding our new OTP secret to any mobile app using the QR code will be super easy!
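
For reference, here is a rough stand-alone sketch of that hex-to-base32 conversion, written in C rather than the Python helper the article describes, so it is illustrative only and skips the QR-code step: it reads the hex seed on standard input and prints the RFC 4648 base32 string that TOTP apps expect.

/* Illustrative hex-to-base32 converter for a TOTP seed (RFC 4648 alphabet). */
#include <ctype.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static const char b32[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

/* Decode two hex characters into one byte; returns -1 on bad input. */
static int hexbyte(const char *s)
{
    int v[2];
    for (int i = 0; i < 2; i++) {
        if (isdigit((unsigned char)s[i]))
            v[i] = s[i] - '0';
        else
            v[i] = tolower((unsigned char)s[i]) - 'a' + 10;
        if (v[i] < 0 || v[i] > 15)
            return -1;
    }
    return (v[0] << 4) | v[1];
}

int main(void)
{
    char hex[128];

    if (scanf("%127s", hex) != 1 || strlen(hex) % 2 != 0) {
        fprintf(stderr, "expected an even-length hex string\n");
        return 1;
    }

    uint32_t acc = 0;   /* bit accumulator */
    int bits = 0;       /* bits currently held in acc */

    for (size_t i = 0; i < strlen(hex); i += 2) {
        int b = hexbyte(hex + i);
        if (b < 0) {
            fprintf(stderr, "bad hex digit\n");
            return 1;
        }
        acc = (acc << 8) | (uint32_t)b;
        bits += 8;
        while (bits >= 5) {             /* emit complete 5-bit groups */
            putchar(b32[(acc >> (bits - 5)) & 0x1f]);
            bits -= 5;
        }
    }
    if (bits > 0)                       /* pad the final group with zero bits */
        putchar(b32[(acc << (5 - bits)) & 0x1f]);
    putchar('\n');
    return 0;
}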

  • SYSTEM CONFIGURATION

We can now move to the configuration of the system to put our new TOTP to use. As you might guess, it’s going to be quite close to what we did with the yubikey.
We need to tweak login.conf. Be careful and keep a root shell open at all times. The few times I broke my OpenBSD were because I messed with login.conf without showing enough care.

  • SSHD CONFIGURATION

Again, keeping a root shell around decreases the risk of losing access to the system and being locked outside.
A good standard is to use PasswordAuthentication no and to use public key only. Except… have a guess what the P stands for in TOTP. Yes, congrats, you guessed it!
We need to switch to PasswordAuthentication yes. However, if we made this change alone, sshd would then accept a public key OR a password (which are TOTP because of our login.conf). 2FA uses both at the same time.
To inform sshd we intend to use both, we need to set AuthenticationMethods publickey,password. This way, the user trying to login will first need to perform the traditional publickey authentication. Once that’s done, ssh will prompt for a password and the user will need to submit a valid TOTP for the system.
We could do this the other way around, but I think bots could try passwords, wasting resources. Evaluated in this order, failing to provide a public key leads to sshd immediately declining your attempt.

  • IMPROVING SECURITY WITHOUT IMPACTING UX

My phone has a long enough password that most of the time, I fail to type it correctly on the first try. Of course, if I had to unlock my phone, launch my TOTP app and use my keyboard to enter what I see on my phone’s screen, I would quickly disable 2FA.
To find a balance, I have whitelisted certain IP addresses and users. If I connect from a particular IP address or as a specific user, I don’t want to go through 2FA. For some users, I might not even enable 2FA.
To sum up, we covered how to create a seed, how to perform a hexadecimal to base32 conversion and how to create a QR code for mobile applications. We configured the login system with login.conf so that ssh authentication uses the TOTP login system, and we told sshd to ask for both the public key and the Time-based One-Time Password. Now you should be all set to use two-factor ssh authentication on OpenBSD!


##News Roundup
###How ZFS maintains file type information in directories

As an aside in yesterday’s history of file type information being available in Unix directories, I mentioned that it was possible for a filesystem to support this even though its Unix didn’t. By supporting it, I mean that the filesystem maintains this information in its on disk format for directories, even though the rest of the kernel will never ask for it. This is what ZFS does.
The easiest way to see that ZFS does this is to use zdb to dump a directory. I’m going to do this on an OmniOS machine, to make it more convincing, and it turns out that this has some interesting results. Since this is OmniOS, we don’t have the convenience of just naming a directory in zdb, so let’s find the root directory of a filesystem, starting from dnode 1 (as seen before).

# zdb -dddd fs3-corestaff-01/h/281 1
Dataset [....]
[...]
microzap: 512 bytes, 4 entries
[...]
ROOT = 3

# zdb -dddd fs3-corestaff-01/h/281 3
Object lvl iblk dblk dsize lsize %full type
3 1 16K 1K 8K 1K 100.00 ZFS directory
[...]
microzap: 1024 bytes, 8 entries

RESTORED = 4396504 (type: Directory)
ckstst = 12017 (type: not specified)
ckstst3 = 25069 (type: Directory)
.demo-file = 5832188 (type: Regular File)
.peergroup = 12590 (type: not specified)
cks = 5 (type: not specified)
cksimap1 = 5247832 (type: Directory)
.diskuse = 12016 (type: not specified)
ckstst2 = 12535 (type: not specified)

This is actually an old filesystem (it dates from Solaris 10 and has been transferred around with ‘zfs send | zfs recv’ since then), but various home directories for real and test users have been created in it over time (you can probably guess which one is the oldest one). Sufficiently old directories and files have no file type information, but more recent ones have this information, including .demo-file, which I made just now so this would have an entry that was a regular file with type information.
Once I dug into it, this turned out to be a change introduced (or activated) in ZFS filesystem version 2, which is described in ‘zfs upgrade -v’ as ‘enhanced directory entries’. As an actual change in (Open)Solaris, it dates from mid 2007, although I’m not sure what Solaris release it made it into. The upshot is that if you made your ZFS filesystem any time in the last decade, you’ll have this file type information in your directories.
How ZFS stores this file type information is interesting and clever, especially when it comes to backwards compatibility. I’ll start by quoting the comment from zfs_znode.h:

/*
* The directory entry has the type (currently unused on
* Solaris) in the top 4 bits, and the object number in
* the low 48 bits. The "middle" 12 bits are unused.
*/

In yesterday’s entry I said that Unix directory entries need to store at least the filename and the inode number of the file. What ZFS is doing here is reusing the 64 bit field used for the ‘inode’ (the ZFS dnode number) to also store the file type, because it knows that object numbers have only a limited range. This also makes old directory entries compatible, by making type 0 (all 4 bits 0) mean ‘not specified’. Since old directory entries only stored the object number and the object number is 48 bits or less, the higher bits are guaranteed to be all zero.
The reason this needed a new ZFS filesystem version is now clear. If you tried to read directory entries with file type information on a version of ZFS that didn’t know about them, the old version would likely see crazy (and non-existent) object numbers and nothing would work. In order to even read a ‘file type in directory entries’ filesystem, you need to know to only look at the low 48 bits of the object number field in directory entries.
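
As a small illustration (the macro names below are invented for this example rather than copied from the ZFS sources), unpacking such a 64-bit directory-entry value is just two bit operations:

/* Illustrative unpacking of a ZFS directory-entry value: the file type
 * lives in the top 4 bits, the object (dnode) number in the low 48 bits. */
#include <stdint.h>
#include <stdio.h>

#define DIRENT_OBJ(x)   ((x) & 0x0000ffffffffffffULL)   /* low 48 bits */
#define DIRENT_TYPE(x)  ((x) >> 60)                     /* top 4 bits  */

int main(void)
{
    /* A made-up entry: object number 5832188 with type 8, the DT_REG-style
     * value for a regular file; type 0 would mean "not specified". */
    uint64_t entry = ((uint64_t)8 << 60) | 5832188ULL;

    printf("object number: %llu\n", (unsigned long long)DIRENT_OBJ(entry));
    printf("file type:     %llu\n", (unsigned long long)DIRENT_TYPE(entry));
    return 0;
}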


###Everything old is new again

Just because KDE4-era software has been deprecated by the KDE-FreeBSD team in the official ports-repository, doesn’t mean we don’t care for it while we still need to. KDE4 was released on January 11th, 2008 — I still have the T-shirt — which was a very different C++ world than what we now live in. Much of the code pre-dates the availability of C++11 — certainly the availability of compilers with C++11 support. The language has changed a great deal in those ten years since the original release.
The platforms we run KDE code on have, too — FreeBSD 12 is a long way from the FreeBSD 6 or 7 that were current at release (although at the time, I was more into OpenSolaris). In particular, since then the FreeBSD world has switched over to Clang, and FreeBSD current is experimenting with Clang 7. So we’re seeing KDE4-era code being built, and running, on FreeBSD 12 with Clang 7. That’s a platform with a very different idea of what constitutes correct code, than what the code was originally written for. (Not quite as big a difference as Helio’s KDE1 efforts, though)
So, while we’re counting down to removing KDE4 from the FreeBSD ports tree, we’re also going through and fixing it to work with Clang 7, which defaults to a newer C++ standard and which is quite picky about some things. Some time in the distant past, when pointers were integers and NULL was zero, there was some confusion about booleans. So there’s lots of code that does list.contains(element) > 0 … this must have been a trick before booleans were a supported type in all our compilers. In any case it breaks with Clang 7, since contains() returns a QBool which converts to a nullptr (when false) which isn’t comparable to the integer 0. Suffice to say I’ve spent more time reading KDE4-era code this month, than in the past two years.
However, work is proceeding apace, so if you really really want to, you can still get your old-school kicks on a new platform. Because we care about packaging things right, even when we want to get rid of it.


###OpenBSD netcat demystified

Owing to its versatile functionality, netcat has earned a reputation as the “TCP/IP Swiss army knife”. For example, you can create a simple chat app using netcat:

  • (1) Open a terminal and input the following command:

# nc -l 3003

This means a netcat process will listen on port 3003 on this machine (the IP address of the current machine is 192.168.35.176).

  • (2) Connect to the aforementioned netcat process from another machine, and send a greeting:

# nc 192.168.35.176 3003
hello

Then in the first machine’s terminal, you will see the “hello” text:

# nc -l 3003
hello

A primitive chatroom has been built successfully. Very cool, isn’t it? I think many people can’t wait to explore more features of netcat now. If you are among them, congratulations! This tutorial may be the right place for you.
In the following parts, I will delve into OpenBSD’s netcat code to give a detailed anatomy of it. The reason for picking OpenBSD’s netcat rather than others’ is that its code base is small (~2000 lines of code) and neat. Furthermore, I also hope this little book can help you learn more about socket programming, not just grasp the usage of netcat.
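
Before diving in, here is a rough, hedged sketch (not netcat's actual code) of the core of the nc -l 3003 demo above: create a socket, bind it to the port, listen, accept one peer, and print whatever the peer sends. OpenBSD's real netcat does far more (IPv6, UDP, Unix-domain sockets, TLS, proxying), but these are the handful of socket calls at its heart.

/* Toy, single-client stand-in for "nc -l 3003"; illustration only. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <err.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == -1)
        err(1, "socket");

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(3003);               /* the port from the demo */
    sin.sin_addr.s_addr = htonl(INADDR_ANY);

    if (bind(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
        err(1, "bind");
    if (listen(s, 1) == -1)
        err(1, "listen");

    int c = accept(s, NULL, NULL);            /* block until a peer connects */
    if (c == -1)
        err(1, "accept");

    char buf[1024];
    ssize_t n;
    while ((n = read(c, buf, sizeof(buf))) > 0)
        write(STDOUT_FILENO, buf, (size_t)n); /* print what the peer sent */

    close(c);
    close(s);
    return 0;
}
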
We’re all set. Let’s go!


##Beastie Bits


##Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Absolute FreeBSD | BSD Now 267
https://original.jupiterbroadcasting.net/127546/absolute-freebsd-bsd-now-267/ (Wed, 10 Oct 2018)


##Headlines
##Interview – Michael W. Lucas – mwlucas@michaelwlucas.com / @mwlauthor

  • BR: [Welcome Back]
  • AJ: What have you been doing since last we talked to you [ed, ssh, and af3e]
  • BR: Tell us more about AF3e
  • AJ: How did the first Absolute FreeBSD come about?
  • BR: Do you have anything special planned for MeetBSD?
  • AJ: What are you working on now? [FM:Jails, Git sync Murder]
  • BR: What are your plans for next year?
  • AJ: How has SEMIBug been going?

Auction at https://mwl.io
Patreon Link:


##Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
