Codified Summer | BSD Now 189
https://original.jupiterbroadcasting.net/113836/codified-summer-bsd-now-189/
Thu, 13 Apr 2017

RSS Feeds:

MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed

Become a supporter on Patreon:

Patreon

– Show Notes: –

Headlines

Google Summer of Code for BSDs

  • FreeBSD

  • FreeBSD’s existing list of GSoC Ideas for potential students

    • FreeBSD/Xen: import the grant-table bus_dma(9) handlers from OpenBSD
    • Add support for usbdump file-format to wireshark and vusb-analyzer
    • Write a new boot environment manager
    • Basic smoke test of all base utilities
    • Port OpenBSD’s pf testing framework and tests
    • Userspace Address Space Annotation
    • zstandard integration in libstand
    • Replace mergesort implementation
    • Test Kload (kexec for FreeBSD)
    • Kernel fuzzing suite
    • Integrate MFSBSD into the release building tools
    • NVMe controller emulation for bhyve
    • Verification of bhyve’s instruction emulation
    • VGA emulation improvements for bhyve
    • audit framework test suite
    • Add more FreeBSD testing to Xen osstest
    • Lua in bootloader
    • POSIX compliance testing framework
    • coreclr: add Microsoft’s coreclr and corefx to the Ports tree.
  • NetBSD

    • Kernel-level projects
    • Medium
    • ISDN NT support and Asterisk integration
    • LED/LCD Generic API
    • NetBSD/azure — Bringing NetBSD to Microsoft Azure
    • OpenCrypto swcrypto(4) enhancements
    • Scalable entropy gathering
    • Userland PCI drivers
    • Hard
    • Real asynchronous I/O
    • Parallelize page queues
    • Tickless NetBSD with high-resolution timers
    • Userland projects
    • Easy
    • Inetd enhancements — Add new features to inetd
    • Curses library automated testing
    • Medium
    • Make Anita support additional virtual machine systems
    • Create an SQL backend and statistics/query page for ATF test results
    • Light weight precision user level time reading
    • Query optimizer for find(1)
    • Port launchd
    • Secure-PLT – supporting RELRO binaries
    • Sysinst alternative interface
    • Hard
    • Verification tool for NetBSD32
    • pkgsrc projects
    • Easy
    • Version control config files
    • Spawn support in pkgsrc tools
    • Authentication server meta-package
    • Medium
    • pkgin improvements
    • Unify standard installation tasks
    • Hard
    • Add dependency information to binary packages
    • Tool to find dependencies precisely
  • LLVM

    • Fuzzing the Bitcode reader

    Description of the project: The optimizer is 25-30% slower when debug info is enabled; it would be nice to track down all the places where we don’t do a good job of ignoring it!

    • Extend clang AST to provide information for the type as written in template instantiations.

    Description of the project: When instantiating a template, the template arguments are canonicalized before being substituted into the template pattern. Clang does not preserve type sugar when subsequently accessing members of the instantiation. Clang should “re-sugar” the type when performing member access on a class template specialization, based on the type sugar of the accessed specialization.

    • Shell auto-completion support for clang.

    Bash and other shells support typing a partial command and then automatically completing it (or at least suggesting completions) when the user presses the tab key. This is usually only supported for popular programs such as package managers (e.g. pressing tab after typing “apt-get install late” queries the APT package database and lists all packages that start with “late”). As of now, clang’s frontend isn’t supported by any common shell.

    • Clang-based C/C++ diff tool.

    Description of the project: Every developer has to interact with diff tools daily. The algorithms are usually based on detecting “longest common subsequences”, which is agnostic to the file type content. A tool that would understand the structure of the code may provide a better diff experience by being robust against, for example, clang-format changes.

    • Find dereference of pointers.

    Description of the project: Find dereference of pointer before checking for nullptr.

    • Warn if virtual calls are made from constructors or destructors.

    Description of the project: Implement a path-sensitive checker that warns if virtual calls are made from constructors and destructors, which is not valid in case of pure virtual calls and could be a sign of user error in non-pure calls.

    • Improve Code Layout

    Description of the project: The goal of the project is to improve the layout/performance of the generated executable. The primary object format considered for the project is ELF, but this can be extended to other object formats. The project will touch both LLVM and lld.

  • Why Isn’t OpenBSD in Google Summer of Code 2017?

  • Hacker News Discussion Thread


Turtles on the Wire: Understanding How the OS Uses the Modern NIC

  • The Simple NIC
  • MAC Address Filters and Promiscuous Mode
  • Problem: The Single Busy CPU
  • A Swing and a Miss
  • Nine Rings for Packets Doomed to be Hashed
  • Problem: Density, Density, Density
  • A Brief Aside: The Virtual NIC
  • Always Promiscuous?
  • The Classification Challenge
  • Problem: CPUs are too ‘slow’
  • Problem: The Interrupts are Coming in too Hot
  • Solution One: Do Less Work
  • Solution Two: Turn Off Interrupts
  • Recapping
  • Future Directions and More Reading

Make Dragonfly BSD great again!

Recently I spent some time reading DragonFly BSD code. While doing so I spotted a vulnerability in the sysvsem subsystem that lets a user point to any piece of memory and write data through it (including kernel space). This can be turned into execution of arbitrary code in the kernel context, and by exploiting this, we’re gonna make DragonFly BSD great again!

DragonFly BSD is a BSD system which originally comes from the FreeBSD project. In 2003 Matthew Dillon forked code from the 4.x branch of FreeBSD and started a new flavour.
I thought of DragonFly BSD as just another fork, but during EuroBSDCon 2015 I accidentally saw a talk about the graphics stack in DragonFly BSD. I had confused rooms, but it was too late to escape, as I was sitting in the middle of a row and the exit seemed light years away from me. 🙂 Anyway, this talk was a sign to me that it’s not just a niche of a niche of a niche of a niche operating system. I recommend spending a few minutes of your precious time to check out the HAMMER file system, DragonFly’s approach to MP, process snapshots and other cool features that it offers. The Wikipedia article is a good starting point.

  • With the exploit, they are able to change the name of the operating system back to FreeBSD, and escalate from an unprivileged user to root.

The bug itself is located in the semctl(2) system call implementation. bcopy(3) in line 385 copies the semid_ds structure to the memory pointed to by arg->buf; this pointer is fully controlled by the user, as it’s one of the syscall’s arguments. So the bad thing here is that we can copy things to an arbitrary address, but we have no idea what we are copying yet. This code was introduced by wrongly merging code from the FreeBSD project; bah, bugs happen.

  • Using this access, the example code shows how to overwrite the function pointers in the kernel used for the open() syscall, and how to overwrite the ostype global, changing the name of the operating system.
  • In the second example, the reference to the credentials of the user trying to open a file are used to overwrite that data, making the user root.

The bug was fixed in an uber-fast manner (within a few hours!) by Matthew Dillon, and version 4.6.1, released shortly after, seems to be safe. In case you care, you know what to do!

  • Thanks to Mateusz Kocielski for the detailed post, and finding the bug

Interview – Wendell – wendell@level1techs.com / @tekwendell

  • Host of Level1Techs website, podcast and YouTube channel

News Roundup

Using yubikeys everywhere

  • Ted Unangst is back, with an interesting post about YUBI Keys

Everybody is getting real excited about yubikeys recently, so I figured I should get excited, too. I have so far resisted two-factor authorizing everything, but this seemed like another fun experiment. There’s a lot written about yubikeys and how you should use one, but nothing I’ve read answered a few of the specific questions I had.
To begin with, I ordered two yubikeys. One regular sized 4 and one nano. I wanted to play with different form factors to see which is better for various uses, and I wanted to test having a key and a backup key. Everybody always talks about having one yubikey. And then if you lose it, terrible things happen. Can this problem be alleviated with two keys? I’m also very curious what happens when I try to login to a service with my phone after enabling U2F.
We’ve got three computers (and operating systems) in the mix, along with a number of (mostly web) services. Wherever possible, I want to use a yubikey both to login to the computer and to authorize myself to remote services.
I started my adventure on my chromebook. Ultimate goal would be to use the yubikey for local logins. Either as a second factor, or as an alternative factor. First things first and we need to get the yubikey into the account I use to sign into the chromebook. Alas, there is apparently no way to enroll only a security key for a Google account. Every time I tried, it would ask me for my phone number. That is not what I want. Zero stars.
Giving up on protecting the chromebook itself, at least maybe I can use it to enable U2F with some other sites. U2F is currently limited to Chrome, but it sounds like everything I want. Facebook signup using U2F was pretty easy. Go to account settings, security subheading, add the device. Tap the button when it glows. Key added. Note that it’s possible to add a key without actually enabling two factor auth, in which case you can still login with only a password, but no way to login with no password and only a USB key. Logged out to confirm it would check the key, and everything looked good, so I killed all my other active sessions. Now for the phone test. Not quite as smooth. Tried to login, the Facebook app then tells me it has sent me an SMS and to enter the code in the box. But I don’t have a phone number attached. I’m not getting an SMS code.
Meanwhile, on my laptop, I have a new notification about a login attempt. Follow the prompts to confirm it’s me and permit the login. This doesn’t have any effect on the phone, however. I have to tap back, return to the login screen, and enter my password again. This time the login succeeds. So everything works, but there are still some rough patches in the flow. Ideally, the phone would more accurately tell me to visit the desktop site, and then automatically proceed after I approve. (The messenger app crashed after telling me my session had expired, but upon restarting it was able to borrow the Facebook app credentials and I was immediately logged back in.)
Let’s configure Dropbox next. Dropbox won’t let you add a security key to an account until after you’ve already set up some other mobile authenticator. I already had the Duo app on my phone, so I picked that, and after a short QR scan, I’m ready to add the yubikey. So the key works to access Dropbox via Chrome. Accessing Dropbox via my phone or Firefox requires entering a six-digit code. There is no way to use a yubikey in a three-legged configuration.
I don’t use Github, but I know they support two factors, so let’s try them next. Very similar to Dropbox. In order to set up a key, I must first set up an authenticator app. This time I went with Yubico’s own desktop authenticator. Instead of scanning the QR code, type in some giant number (on my Windows laptop), and it spits out an endless series of six digit numbers, but only while the yubikey is inserted. I guess this is kind of what I want, although a three pound yubikey is kind of unwieldy.
As part of my experiment, I noticed that Dropbox verifies passwords before even looking at the second auth. I have a feeling that they should be checked at the same time. No sense allowing my password guessing attack to proceed while I plot how to steal someone’s yubikey. In a sense, the yubikey should serve as a salt, preventing me from mounting such an attack until I have it, thus creating a race where the victim notices the key is gone and revokes access before I learn the password. If I know the password, the instant I grab the key I get access. Along similar lines, I was able to complete a password reset without entering any kind of secondary code.
Having my phone turn into a second factor is a big part of what I’m looking to avoid with the yubikey. I’d like to be able to take my phone with me, logged into some sites but not all, and unable to login to the rest. All these sites that require using my phone as mobile authenticator are making that difficult. I bought the yubikey because it was cheaper than buying another phone! Using the Yubico desktop authenticator seems the best way around that.

  • The article also provides instructions for configuring the Yubikey on OpenBSD

A few notes about OTP. As mentioned, the secret key is the real password. It’s stored on whatever laptop or server you login to. Meaning any of those machines can take the key and use it to login to any other machine. If you use the same yubikey to login to both your laptop and a remote server, your stolen laptop can trivially be used to login to the server without the key. Be mindful of that when setting up multiple machines. Also, the OTP counter isn’t synced between machines in this setup, which allows limited replay attacks.

  • Ted didn’t switch his SSH keys to the Yubikey, because it doesn’t support ED25519, and he just finished rotating all of his keys and doesn’t want to do it again.

I did most of my experimenting with the larger yubikey, since it was easier to move between machines. For operations involving logging into a web site, however, I’d prefer the nano. It’s very small, even smaller than the tiniest wireless mouse transceivers I’ve seen. So small, in fact, I had trouble removing it because I couldn’t find anything small enough to fit through the tiny loop. But probably a good thing. Most other micro USB gadgets stick out just enough to snag when pushing a laptop into a bag. Not the nano. You lose a port, but there’s really no reason to ever take it out. Just leave it in, and then tap it whenever you login to the tubes. It would not be a good choice for authenticating to the local machine, however. The larger device, sized to fit on a keychain, is much better for that.
It is possible to use two keys as backups. Facebook and Dropbox allow adding two U2F keys. This is perhaps a little tiresome if there’s lots of sites, as I see no way to clone a key. You have to login to every service. For challenge response and OTP, however, the personalization tool makes it easy to generate lots of yubikeys with the same secrets. On the other hand, a single device supports an infinite number of U2F sites. The programmable interfaces like OTP are limited to only two slots, and the first is already used by the factory OTP setup.


What happened to my vlan

A long-term goal of the effort I’m driving to unlock OpenBSD’s network stack is obviously to increase performance. So I understand that you might find it confusing when some of our changes introduce performance regressions.
It is just really hard to make incremental changes without introducing temporary regressions. But just as security is a process, improving performance is also a process. Recently markus@ told me that vlan(4) performance dropped in recent releases. He had some ideas why, but he couldn’t provide evidence. So what really happened?
Hrvoje Popovski was kind enough to help me with some tests. He first confirmed that on his Xeon box (E5-2643 v2 @ 3.50GHz), forwarding performance without pf(4) dropped from 1.42Mpps to 880Kpps when using vlan(4) on both interfaces.
Together vlan_input() and vlan_start() represent 25% of the time CPU1 spends processing packets. This is not exactly between 33% and 50% but it is close enough. The assumption we made earlier is certainly too simple. If we compare the amount of work done in process context, represented by if_input_process() we clearly see that half of the CPU time is not spent in ether_input().
I’m not sure how this is related to the measured performance drop. It is actually hard to tell since packets are currently being processed in 3 different contexts. One of the arguments mikeb@ raised when we discussed moving everything in a single context, is that it is simpler to analyse and hopefully make it scale.
With some measurements, a couple of nice pictures, a bit of analysis and some educated guesses, we are now in a position to say that the performance impact observed with vlan(4) is certainly due to the pseudo-driver itself. A decrease of 30% to 50% is not what I would expect from such a pseudo-driver.
I originally heard that the reason for this regression was the use of SRP but by looking at the profiling data it seems to me that the queuing API is the problem. In the graph above the CPU time spent in if_input() and if_enqueue() from vlan(4) is impressive. Remember, in the case of vlan(4) these operations are done per packet!
When if_input() was introduced, the queuing API did not exist and putting/taking a single packet on/from an interface queue was cheap. Now it requires a mutex per operation, which in the case of packets received and sent on vlan(4) means grabbing three mutexes per packet.
I still can’t say if my analysis is correct or not, but at least it could explain the decrease observed by Hrvoje when testing multiple vlan(4) configurations. vlan_input() takes one mutex per packet, so it decreases the number of forwarded packets by ~100Kpps on this machine, while vlan_start() taking two mutexes decreases it by ~200Kpps.

  • An interesting analysis of the routing performance regression on OpenBSD
  • I have asked Olivier Cochard-Labbe about doing a similar comparison of routing performance on FreeBSD when a vlan pseudo interface is added to the forwarding path

NetBSD: the first BSD introducing a modern process plugin framework in LLDB

  • Clean up in ptrace(2) ATF tests

We have created some maintenance burden with the current ptrace(2) regression tests. The main issues with them are code duplication and the split between generic (Machine Independent) and port-specific (Machine Dependent) test files. I’ve eliminated some of the ballast and merged the tests into the appropriate directory, tests/lib/libc/sys/. The old location (tests/kernel) was a violation of the tests/README recommendation.

  • PTRACE_FORK on !x86 ports

With motivation from Martin Husemann, we have investigated the issue with the PTRACE_FORK ATF regression tests. It was discovered that these tests aren’t functional on evbarm, alpha, shark, sparc and sparc64, and likely on other non-x86 ports. We discovered that there is a missing SIGTRAP that should be emitted from the child during the fork(2) handshake. The proper order of operations is as follows:

parent emits SIGTRAP with si_code=TRAP_CHLD and pe_set_event=pid of forkee
child emits SIGTRAP with si_code=TRAP_CHLD and pe_set_event=pid of forker

Only the x86 ports were emitting the second SIGTRAP signal.

  • PT_SYSCALL and PT_SYSCALLEMU

With the addition of PT_SYSCALLEMU we can implement a virtual kernel syscall monitor. It means that we can fake syscalls within a debugger. In order to achieve this feature, we need to use the PT_SYSCALL operation, catch SIGTRAP with si_code=TRAP_SCE (syscall entry), call PT_SYSCALLEMU and perform an emulated userspace syscall that would have been done by the kernel, followed by calling another PT_SYSCALL with si_code=TRAP_SCX.

  • What has been done in LLDB

A lot of work has been done with the goal of getting breakpoints functional. This target exposed bugs in the existing local patches and unveiled missing features that needed to be added. My initial test was tracing a dummy hello-world application in C. I sniffed the GDB Remote Protocol packets and compared them between Linux and NetBSD. This helped to streamline both versions and bring the NetBSD support up to the required Linux level.

  • Plan for the next milestone

I’ve listed the following goals for the next milestone.

  • watchpoints support
  • floating point registers support
  • enhance core(5) and make it work for multiple threads
  • introduce PT_SETSTEP and PT_CLEARSTEP in ptrace(2)
  • support threads in the NetBSD Process Plugin
  • research F_GETPATH in fcntl(2)
  • Beyond the next milestone is x86 32-bit support.

LibreSSL 2.5.2 released

  • Added the recallocarray(3) memory allocation function, and converted various places in the library to use it, such as CBB and BUF_MEM_grow. recallocarray(3) is similar to reallocarray(3). Newly allocated memory is cleared, as with calloc(3). Memory that becomes unallocated while shrinking or moving existing allocations is explicitly discarded by unmapping or clearing it to 0.
  • Added new root CAs from SECOM Trust Systems / Security Communication of Japan.
  • Added EVP interface for MD5+SHA1 hashes.
  • Fixed DTLS client failures when the server sends a certificate request.
  • Correct handling of padding when upgrading an SSLv2 challenge into an SSLv3/TLS connection.
  • Allow protocols and ciphers to be set on a TLS config object in libtls.
  • Improved nc(1) TLS handshake CPU usage and server-side error reporting.

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

And then the murders began | BSD Now 188
https://original.jupiterbroadcasting.net/113621/and-then-the-murders-began-bsd-now-188/
Thu, 06 Apr 2017


Headlines

DragonFly BSD 4.8 is released

  • Improved kernel performance
    • This release further localizes cache lines and reduces/removes cache ping-ponging on globals. For bulk builds on many-core or multi-socket systems, we see around a 5% improvement, and certain subsystems such as namecache lookups and exec()s see massive focused improvements. See the corresponding mailing list post for details.
  • Support for eMMC booting, and mobile and high-performance PCIe SSDs
    • This kernel release includes support for eMMC storage as the boot device. We also sport a brand new SMP-friendly, high-performance NVMe SSD driver (PCIe SSD storage). Initial device test results are available.
  • EFI support
    • The installer can now create an EFI or legacy installation. Numerous adjustments have been made to userland utilities and the kernel to support EFI as a mainstream boot environment. The /boot filesystem may now be placed either in its own GPT slice, or in a DragonFly disklabel inside a GPT slice.
    • DragonFly, by default, creates a GPT slice for all of DragonFly and places a DragonFly disklabel inside it with all the standard DFly partitions, such that the disk names are roughly the same as they would be in a legacy system.
  • Improved graphics support
    • The i915 driver has been updated to match the version found with the Linux 4.6 kernel. Broadwell and Skylake processor users will see improvements.
  • Other user-affecting changes
    • Kernel is now built using -O2.
    • VKernels now use COW, so multiple vkernels can share one disk image.
    • powerd() is now sensitive to time and temperature changes.
    • Non-boot-filesystem kernel modules can be loaded in rc.conf instead of loader.conf.

#8005 poor performance of 1MB writes on certain RAID-Z configurations

  • Matt Ahrens posts a new patch for OpenZFS

Background: RAID-Z requires that space be allocated in multiples of P+1 sectors, because this is the minimum size block that can have the required amount of parity. Thus blocks on RAIDZ1 must be allocated in a multiple of 2 sectors; on RAIDZ2, a multiple of 3; and on RAIDZ3, a multiple of 4. A sector is a unit of 2^ashift bytes, typically 512B or 4KB.
To satisfy this constraint, the allocation size is rounded up to the proper multiple, resulting in up to 3 “pad sectors” at the end of some blocks. The contents of these pad sectors are not used, so we do not need to read or write these sectors. However, some storage hardware performs much worse (around 1/2 as fast) on mostly-contiguous writes when there are small gaps of non-overwritten data between the writes. Therefore, ZFS creates “optional” zio’s when writing RAID-Z blocks that include pad sectors. If writing a pad sector will fill the gap between two (required) writes, we will issue the optional zio, thus doubling performance. The gap-filling performance improvement was introduced in July 2009.
Writing the optional zio is done by the io aggregation code in vdev_queue.c. The problem is that it is also subject to the limit on the size of aggregate writes, zfs_vdev_aggregation_limit, which is by default 128KB. For a given block, if the amount of data plus padding written to a leaf device exceeds zfs_vdev_aggregation_limit, the optional zio will not be written, resulting in a ~2x performance degradation.
The solution is to aggregate optional zio’s regardless of the aggregation size limit.

  • As you can see from the graphs, this can make a large difference in performance.
  • I encourage you to read the entire commit message, it is well written and very detailed.

Can you spot the OpenSSL vulnerability

This code was introduced in OpenSSL 1.1.0d, which was released a couple of days ago. This is in the server SSL code, ssl/statem/statem_srvr.c, ssl_bytes_to_cipher_list(), and can easily be reached remotely. Can you spot the vulnerability?
So there is a loop, and within that loop we have an ‘if’ statement that tests a number of conditions. If any of those conditions fail, OPENSSL_free(raw) is called. But raw isn’t the address that was allocated; raw is incremented every loop iteration. Hence, there is a remote invalid-free vulnerability.
But not quite: none of those checks in the ‘if’ statement can actually fail. Earlier in the function, there is a check that verifies that the packet contains at least 1 byte, so PACKET_get_1 cannot fail. Furthermore, earlier in the function it is verified that the packet length is a multiple of 3, hence PACKET_copy_bytes and PACKET_forward cannot fail.

  • So, does the code do what the original author thought, or expected it to do?
  • But what about the next person that modifies that code, maybe changing or removing one of the earlier checks, allowing one of those if conditions to fail, and execute the bad code?

Nonetheless OpenSSL has acknowledged that the OPENSSL_free line needs a rewrite: Pull Request #2312
PS I’m not posting this to ridicule the OpenSSL project or their programming skills. I just like reading code and finding corner cases that impact security, which is an effort that ultimately works in everybody’s best interest, and I like to share what I find. Programming is a very difficult enterprise and everybody makes mistakes.

  • Thanks to Guido Vranken for the sharp eye and the blog post

Research Debt

  • I found this article interesting as it relates to not just research, but a lot of technical areas in general

Achieving a research-level understanding of most topics is like climbing a mountain. Aspiring researchers must struggle to understand vast bodies of work that came before them, to learn techniques, and to gain intuition. Upon reaching the top, the new researcher begins doing novel work, throwing new stones onto the top of the mountain and making it a little taller for whoever comes next.
People expect the climb to be hard. It reflects the tremendous progress and cumulative effort that’s gone into the research. The climb is seen as an intellectual pilgrimage, the labor a rite of passage. But the climb could be massively easier. It’s entirely possible to build paths and staircases into these mountains. The climb isn’t something to be proud of. The climb isn’t progress: the climb is a mountain of debt.
Programmers talk about technical debt: there are ways to write software that are faster in the short run but problematic in the long run.

Poor Exposition – Often, there is no good explanation of important ideas and one has to struggle to understand them. This problem is so pervasive that we take it for granted and don’t appreciate how much better things could be.

Undigested Ideas – Most ideas start off rough and hard to understand. They become radically easier as we polish them, developing the right analogies, language, and ways of thinking.

Bad abstractions and notation – Abstractions and notation are the user interface of research, shaping how we think and communicate. Unfortunately, we often get stuck with the first formalisms to develop even when they’re bad. For example, an object with extra electrons is negative, and pi is wrong

Noise – Being a researcher is like standing in the middle of a construction site. Countless papers scream for your attention and there’s no easy way to filter or summarize them. We think noise is the main way experts experience research debt.

There’s a tradeoff between the energy put into explaining an idea, and the energy needed to understand it. On one extreme, the explainer can painstakingly craft a beautiful explanation, leading their audience to understanding without even realizing it could have been difficult. On the other extreme, the explainer can do the absolute minimum and abandon their audience to struggle. This energy is called interpretive labor
Research distillation is the opposite of research debt. It can be incredibly satisfying, combining deep scientific understanding, empathy, and design to do justice to our research and lay bare beautiful insights. Distillation is also hard. It’s tempting to think of explaining an idea as just putting a layer of polish on it, but good explanations often involve transforming the idea. This kind of refinement of an idea can take just as much effort and deep understanding as the initial discovery.
+ The distillation can often require an entirely different set of skills than the original creation of the idea. Almost all of the BSD projects have some great ideas or subsystems that just need distillation into easy-to-understand and easy-to-use platforms or tools.
Like the theoretician, the experimentalist or the research engineer, the research distiller is an integral role for a healthy research community. Right now, almost no one is filling it.

  • Anyway, if that bit piqued your interest, go read the full article and the suggested further reading.

News Roundup

And then the murders began.

A whole bunch of people have pointed me at articles like this one, which claim that you can improve almost any book by making the second sentence “And then the murders began.”
It’s entirely possible they’re correct. But let’s check, with a sampling of books. As different books come in different tenses and have different voices, I’ve made some minor changes.

“Welcome to Cisco Routers for the Desperate! And then the murders begin.” — Cisco Routers for the Desperate, 2nd ed

“Over the last ten years, OpenSSH has become the standard tool for remote management of Unix-like systems and many network devices. And then the murders began.” — SSH Mastery

“The Z File System, or ZFS, is a complicated beast, but it is also the most powerful tool in a sysadmin’s Batman-esque utility belt. And then the murders begin.” — FreeBSD Mastery: Advanced ZFS

“Blood shall rain from the sky, and great shall be the lamentation of the Linux fans. And then, the murders will begin.” — Absolute FreeBSD, 3rd Ed


Netdata now supports FreeBSD

netdata is a system for distributed real-time performance and health monitoring. It provides unparalleled insights, in real time, into everything happening on the system it runs on (including applications such as web and database servers), using modern interactive web dashboards.

  • From the release notes:

apps.plugin ported for FreeBSD


Distrowatch Weekly reviews RaspBSD

RaspBSD is a FreeBSD-based project which strives to create a custom build of FreeBSD for single board and hobbyist computers. RaspBSD takes a recent snapshot of FreeBSD and adds on additional components, such as the LXDE desktop and a few graphical applications. The RaspBSD project currently has live images for Raspberry Pi devices, the Banana Pi, Pine64 and BeagleBone Black & Green computers.

The default RaspBSD system is quite minimal, running a mere 16 processes when I was logged in. In the background the operating system runs cron, OpenSSH, syslog and the powerd power management service. Other than the user’s shell and terminals, nothing else is running. This means RaspBSD uses little memory, requiring just 16MB of active memory and 31MB of wired or kernel memory.

I made note of a few practical differences between running RaspBSD on the Pi versus my usual Raspbian operating system. One minor difference is RaspBSD turns off the Pi’s external power light after booting. Raspbian leaves the light on. This means it looks like the Pi is off when it is running RaspBSD, but it also saves a little electricity.

Conclusions: Apart from these little differences, running RaspBSD on the Pi was a very similar experience to running Raspbian and my time with the operating system was pleasantly trouble-free. Long-term, I think applying source updates to the base system might be tedious and SD disk operations were slow. However, the Pi usually is not utilized for its speed, but rather its low cost and low-energy usage. For people who are looking for a small home server or very minimal desktop box, RaspBSD running on the Pi should be suitable.


Research UNIX V8, V9 and V10 made public by Alcatel-Lucent

  • Alcatel-Lucent USA Inc. (“ALU-USA”), on behalf of itself and Nokia Bell Laboratories agrees, to the extent of its ability to do so, that it will not assert its copyright rights with respect to any non-commercial copying, distribution, performance, display or creation of derivative works of Research Unix®1 Editions 8, 9, and 10.
  • Research Unix is a term used to refer to versions of the Unix operating system for DEC PDP-7, PDP-11, VAX and Interdata 7/32 and 8/32 computers, developed in the Bell Labs Computing Science Research Center. The version breakdown can be viewed on its Wikipedia page
  • It only took 30+ years, but now they’re public
  • You can grab them from here
  • If you’re wondering what happened with Research Unix: after Version 10, Unix development at Bell Labs was stopped in favor of a successor system, Plan 9, which itself was succeeded by Inferno.

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post And then the murders began | BSD Now 188 first appeared on Jupiter Broadcasting.

]]>
Catching up to BSD | BSD Now 187 https://original.jupiterbroadcasting.net/113371/catching-up-to-bsd-bsd-now-187/ Thu, 30 Mar 2017 00:42:13 +0000 https://original.jupiterbroadcasting.net/?p=113371 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines NetBSD 7.1 released This update represents a selected subset of fixes deemed important for security or stability reasons, as well as new features and […]

The post Catching up to BSD | BSD Now 187 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –

Headlines

NetBSD 7.1 released

  • This update represents a selected subset of fixes deemed important for security or stability reasons, as well as new features and enhancements.
  • Kernel

    • compat_linux(8): Fully support sched_setaffinity and sched_getaffinity, fixing, e.g., the Intel Math Kernel Library.
  • DTrace:

    • Avoid redefined symbol errors when loading the module.
    • Fix module autoload.

  • IPFilter:

    • Fix matching of ICMP queries when NAT’d through IPF.
    • Fix lookup of original destination address when using a redirect rule. This is required for transparent proxying by squid, for example.
  • ipsec(4): Fix NAT-T issue with NetBSD being the host behind NAT.

  • Drivers

    • Add vioscsi driver for the Google Compute Engine disk.
    • ichsmb(4): Add support for Braswell CPU and Intel 100 Series.
    • wm(4):
    • Add C2000 KX and 2.5G support.
    • Add Wake On Lan support.
    • Fixed a lot of bugs
  • Security Fixes

  • ARM related

    • Support for Raspberry Pi Zero.
    • ODROID-C1 Ethernet now works.

Summary of the preliminary LLDB support project

  • What has been done in NetBSD

    • Verified the full matrix of combinations of wait(2) and ptrace(2) in the following test-cases
    • GNU libstdc++ std::call_once bug investigation
    • Improving documentation and other minor system parts
    • Documentation of ptrace(2) and explanation how debuggers work
    • Introduction of new siginfo(2) codes for SIGTRAP
    • New ptrace(2) interfaces
  • What has been done in LLDB

  • Native Process NetBSD Plugin
  • The MonitorCallback function
  • Other LLDB code, out of the NativeProcessNetBSD Plugin
  • Automated LLDB Test Results Summary

  • Plan for the next milestone

    • fix conflict with system-wide py-six
    • add support for auxv read operation
    • switch resolution of pid -> path to executable from /proc to sysctl(7)
    • recognize Real-Time Signals (SIGRTMIN-SIGRTMAX)
    • upstream !NetBSDProcessPlugin code
    • switch std::call_once to llvm::call_once
    • add new ptrace(2) interface to lock and unlock threads from execution
    • switch the current PT_WATCHPOINT interface to PT_GETDBREGS and PT_SETDBREGS

Actually building a FreeBSD Phone

  • There have been a number of different projects that have proposed building a FreeBSD based smart phone
  • This project is a bit different, and I think that gives it a better chance to make progress
  • It uses off-the-shelf parts, so while not as neatly integrated as a regular smartphone device, it makes a much better prototype, and is more readily available.
  • Hardware overview: X86-based, long-lasting (user-replaceable) battery, WWAN Modem (w/LTE), 4-5″ LCD Touchscreen (Preferably w/720p resolution, IPS), upgradable storage.
  • Currently targeting the UDOO Ultra platform. It features Intel Pentium N3710 (2.56GHz Quad-core, HD Graphics 405 [16 EUs @ 700MHz], VT-x, AES-NI), 2x4GB DDR3L RAM, 32GB eMMC storage built-in, further expansion w/M.2 SSD & MicroSD slot, lots of connectivity onboard.
  • Software: FreeBSD Hypervisor (bhyve or Xen) to run atop the hardware, hosting two separate hosts.
    • One will run an instance of pfSense, the “World’s Most Popular Open Source Firewall” to handle the WWAN connection, routing, and Firewall (as well as Secure VPN if desired).
    • The other instance will run a slimmed down installation of FreeBSD. The UI will be tweaked to work best in this form factor & resources tuned for this platform. There will be a strong reliance on Google Chromium & Google’s services (like Google Voice).
  • The project has a detailed log, and it looks like the hardware it is based on will ship in the next few weeks, so we expect to see more activity.

News Roundup

NVME M.2 card road tests (Matt Dillon)

  • DragonFlyBSD’s Matt Dillon has posted a rundown of the various M.2 NVMe devices he has tested
    • SAMSUNG 951
    • SAMSUNG 960 EVO
    • TOSHIBA OCZ RD400
    • INTEL 600P
    • WD BLACK 256G
    • MYDIGITALSSD
    • PLEXTOR M8Pe
  • It is interesting to see the relative performance of each device, but also how they handle the workload and manage their temperature (or don’t in a few cases)
  • The link provides a lot of detail about different block sizes and overall performance

ZREP ZFS replication and failover

  • zrep is a robust yet easy-to-use, ZFS-based replication and failover solution. It can also serve as the conduit to create a simple backup hub.
  • The tool was originally written for Solaris, and is written in ksh
  • However, it seems people have used it on FreeBSD and even FreeNAS by installing the ksh93 port
  • Has anyone used this? How does it compare to tools like zxfer?
  • There is a FreeBSD port, but it is a few versions behind, someone should update it
  • We would be interested in hearing some feedback

Catching up on some TrueOS News


Catching up on some OpenBSD News


Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post Catching up to BSD | BSD Now 187 first appeared on Jupiter Broadcasting.

]]>
Fast & the Firewall: Tokyo Drift | BSD Now 186 https://original.jupiterbroadcasting.net/107716/fast-the-firewall-tokyo-drift-bsd-now-186/ Thu, 23 Mar 2017 06:37:47 +0000 https://original.jupiterbroadcasting.net/?p=107716 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines AsiaBSDcon Reports and Reviews AsiaBSDcon schedule Schedule and slides from the 4th bhyvecon Michael Dexter’s trip report on the iXsystems blog NetBSD AsiaBSDcon booth […]

The post Fast & the Firewall: Tokyo Drift | BSD Now 186 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –

Headlines

AsiaBSDcon Reports and Reviews


TrueOS Community Guidelines are here!

  • TrueOS has published its new Community Guidelines

The TrueOS Project has existed for over ten years. Until now, there was no formally defined process for interested individuals in the TrueOS community to earn contributor status as an active committer to this long-standing project. The current core TrueOS developers (Kris Moore, Ken Moore, and Joe Maloney) want to provide the community more opportunities to directly impact the TrueOS Project, and wish to formalize the process for interested people to gain full commit access to the TrueOS repositories.

  • These describe what is expected of community members and committers
  • They also describe the process of getting commit access to the TrueOS repo:

Previously, Kris directly handed out commit bits. Now, the Core developers have provided a small list of requirements for gaining a TrueOS commit bit:
Create five or more pull requests in a TrueOS Project repository within a single six month period.
Stay active in the TrueOS community through at least one of the available community channels (Gitter, Discourse, IRC, etc.).
Request commit access from the core developers via core@trueos.org OR
Core developers contact you concerning commit access.

Pull requests can be any contribution to the project, from minor documentation tweaks to creating full utilities.

At the end of every month, the core developers review the commit logs, removing elements that break the Project or deviate too far from its intended purpose. Additionally, outstanding pull requests with no active dissension are immediately merged, if possible. For example, a user submits a pull request which adds a little-used OpenRC script. No one from the community comments on the request or otherwise argues against its inclusion, resulting in an automatic merge at the end of the month. In this manner, solid contributions are routinely added to the project and never left in a state of “limbo”.

  • The page also describes the perks of being a TrueOS committer:

Contributors to the TrueOS Project enjoy a number of benefits, including:
A personal TrueOS email alias: @trueos.org
Full access for managing TrueOS issues on GitHub.
Regular meetings with the core developers and other contributors.
Access to private chat channels with the core developers.
Recognition as part of an online Who’s Who of TrueOS developers.
The eternal gratitude of the core developers of TrueOS.
A warm, fuzzy feeling.


Intel Donates $250,000 to the FreeBSD Foundation

Intel will be more actively engaging with the FreeBSD Foundation and the FreeBSD Project to deliver more timely support for Intel products and technologies in FreeBSD.
Intel has contributed code to FreeBSD for individual device drivers (e.g., NICs) in the past, but is now seeking a more holistic “systems thinking” approach.

We will work closely with the FreeBSD Foundation to ensure the drivers, tools, and applications needed on Intel® SSD-based storage appliances are available to the community. This collaboration will also provide timely support for future Intel® 3D XPoint™ products.

  • Thank you very much, Intel!

Applied FreeBSD: Basic iSCSI

iSCSI is often touted as a low-cost replacement for fibre-channel (FC) Storage Area Networks (SANs). Instead of having to setup a separate fibre-channel network for the SAN, or invest in the infrastructure to run Fibre-Channel over Ethernet (FCoE), iSCSI runs on top of standard TCP/IP. This means that the same network equipment used for routing user data on a network could be utilized for the storage as well.

This article will cover a very basic setup where a FreeBSD server is configured as an iSCSI Target, and another FreeBSD server is configured as the iSCSI Initiator. The iSCSI Target will export a single disk drive, and the initiator will create a filesystem on this disk and mount it locally. Advanced topics, such as multipath, ZFS storage pools, failover controllers, etc. are not covered.

The real magic is the /etc/ctl.conf file, which contains all of the information necessary for ctld to share disk drives on the network. Check out the man page for /etc/ctl.conf for more details; below is the configuration file that I created for this test setup. Note that on a system that has never had iSCSI configured, there will be no existing configuration file, so go ahead and create it.
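The article’s actual configuration file is not reproduced here, but a minimal /etc/ctl.conf consistent with the ctladm output below (two 5 GB block-backed LUNs, no authentication) might look something like this. The file paths and portal-group name are illustrative, not the author’s actual setup:

```conf
# Sketch of a minimal /etc/ctl.conf -- paths and names are examples only.
# See the ctl.conf(5) man page for the full syntax.
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
}

target iqn.2017-02.lab.testing:basictarget {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /root/iscsi_disk0.img
                size 5G
        }
        lun 1 {
                path /root/iscsi_disk1.img
                size 5G
        }
}
```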

  • Then, enable ctld and start it:
    • sysrc ctld_enable="YES"
    • service ctld start
  • You can use the ctladm command to see what is going on:

root@bsdtarget:/dev # ctladm lunlist
(7:0:0/0): Fixed Direct Access SPC-4 SCSI device
(7:0:1/1): Fixed Direct Access SPC-4 SCSI device
root@bsdtarget:/dev # ctladm devlist
LUN Backend Size (Blocks) BS Serial Number Device ID
0 block 10485760 512 MYSERIAL 0 MYDEVID 0
1 block 10485760 512 MYSERIAL 1 MYDEVID 1

  • Now, let’s configure the client side.
  • In order for a FreeBSD host to become an iSCSI Initiator, the iscsid daemon needs to be started:
    • sysrc iscsid_enable="YES"
    • service iscsid start

Next, the iSCSI Initiator can manually connect to the iSCSI target using the iscsictl tool. While setting up a new iSCSI session, this is probably the best option. Once you are sure the configuration is correct, add the configuration to the /etc/iscsi.conf file (see man page for this file). For iscsictl, pass the IP address of the target as well as the iSCSI IQN for the session:

  • iscsictl -A -p 192.168.22.128 -t iqn.2017-02.lab.testing:basictarget

    • You should now have a new device (check dmesg), in this case, da1
    • The guide then walks through partitioning the disk, laying down a UFS file system, and mounting it
    • It also walks through how to disconnect iSCSI, in case you don’t want it anymore
    • This all looked nice and easy, and it works very well. Now let’s see what happens when you try to mount the iSCSI share from Windows
    • Ok, that wasn’t so bad.
    • Now, instead of sharing an entire disk on the host via iSCSI, share a zvol. Now your Windows machine can be backed by ZFS. All of your problems are solved.
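For a persistent setup, the manual iscsictl invocation above can be mirrored in /etc/iscsi.conf. This is a sketch: the nickname t0 is arbitrary, and the address and IQN are the ones from the example session:

```conf
# Sketch of /etc/iscsi.conf -- see the iscsi.conf(5) man page.
# "t0" is just a nickname for this session.
t0 {
        TargetAddress = 192.168.22.128
        TargetName    = iqn.2017-02.lab.testing:basictarget
}
```

With that in place, something like iscsictl -An t0 should attach the target by nickname instead of spelling out the address and IQN each time.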

Interview – Philipp Buehler – pbuehler@sysfive.com

  • Technical Lead at SysFive, and Former OpenBSD Committer

News Roundup

Half a dozen new features in mandoc -T html

  • mandoc’s HTML output mode got some new features

Even though mdoc(7) is a semantic markup language, traditionally none of the semantic annotations were communicated to the reader. […] Now, at least in -T html output mode, you can see the semantic function of marked-up words by hovering your mouse over them.

In terminal output modes, we have the ctags(1)-like internal search facility built around the less(1) tag jump (:t) feature for quite some time now. We now have a similar feature in -T html output mode. To jump to (almost) the same places in the text, go to the address bar of the browser, type a hash mark (‘#’) after the URI, then the name of the option, command, variable, error code etc. you want to jump to, and hit enter.

  • Check out the full report by Ingo Schwarze (schwarze@) and try out these new features

Optimizing IllumOS Kernel Crypto

  • Sašo Kiselkov, of ZFS fame, looked into the performance of the OpenSolaris kernel crypto framework and found it lacking.
  • The article also spends a few minutes on the different modes and how they work.

Recently I’ve had some motivation to look into the KCF on Illumos and discovered that, unbeknownst to me, we already had an AES-NI implementation that was automatically enabled when running on Intel and AMD CPUs with AES-NI support. This work was done back in 2010 by Dan Anderson. This was great news, so I set out to test the performance in Illumos in a VM on my Mac with a Core i5 3210M (2.5GHz normal, 3.1GHz turbo).

  • The initial tests of “what the hardware can do” were done in OpenSSL

So now comes the test for the KCF. I wrote a quick’n’dirty crypto test module that just performed a bunch of encryption operations and timed the results.

  • KCF got around 100 MB/s for each algorithm, except half that for AES-GCM
  • OpenSSL had done over 3000 MB/s for CTR mode, 500 MB/s for CBC, and 1000 MB/s for GCM

What the hell is that?! This is just plain unacceptable. Obviously we must have hit some nasty performance snag somewhere, because this is comical. And sure enough, we did.
When looking around in the AES-NI implementation I came across this bit in aes_intel.s that performed the CLTS instruction.
This is a problem: 3.1.2 Instructions That Cause VM Exits Conditionally
CLTS. The CLTS instruction causes a VM exit if the bits in position 3 (corresponding to CR0.TS) are set in both the CR0 guest/host mask and the CR0 read shadow.
The CLTS instruction signals to the CPU that we’re about to use FPU registers (which is needed for AES-NI), which in VMware causes an exit into the hypervisor. And we’ve been doing it for every single AES block! Needless to say, performing the equivalent of a very expensive context switch every 16 bytes is going to hurt encryption performance a bit. The reason why the kernel is issuing CLTS is because for performance reasons, the kernel doesn’t save and restore FPU register state on kernel thread context switches. So whenever we need to use FPU registers inside the kernel, we must disable kernel thread preemption via a call to kpreempt_disable() and kpreempt_enable() and save and restore FPU register state manually. During this time, we cannot be descheduled (because if we were, some other thread might clobber our FPU registers), so if a thread does this for too long, it can lead to unexpected latency bubbles.

The solution was to restructure the AES and KCF block crypto implementations in such a way that we execute encryption in meaningfully small chunks. I opted for 32k bytes, for reasons which I’ll explain below. Unfortunately, doing this restructuring work was a bit more complicated than one would imagine, since in the KCF the implementation of the AES encryption algorithm and the block cipher modes is separated into two separate modules that interact through an internal API, which wasn’t really conducive to high performance (we’ll get to that later). Anyway, having fixed the issue here and running the code at near native speed, this is what I get:
AES-128/CTR: 439 MB/s
AES-128/CBC: 483 MB/s
AES-128/GCM: 252 MB/s

Not disastrous anymore, but still, very, very bad. Of course, you’ve got keep in mind, the thing we’re comparing it to, OpenSSL, is no slouch. It’s got hand-written highly optimized inline assembly implementations of most of these encryption functions and their specific modes, for lots of platforms. That’s a ton of code to maintain and optimize, but I’ll be damned if I let this kind of performance gap persist.

Fixing this, however, is not so trivial anymore. It pertains to how the KCF’s block cipher mode API interacts with the cipher algorithms. It is beautifully designed and implemented in a fashion that creates minimum code duplication, but this also means that it’s inherently inefficient.

ECB, CBC and CTR gained the ability to pass an algorithm-specific “fastpath” implementation of the block cipher mode, because these functions benefit greatly from pipelining multiple cipher calls into a single place.

ECB, CTR and CBC decryption benefit enormously from being able to exploit the wide XMM register file on Intel to perform encryption/decryption operations on 8 blocks at the same time in a non-interlocking manner. The performance gains here are on the order of 5-8x.
CBC encryption benefits from not having to copy the previously encrypted ciphertext blocks into memory and back into registers to XOR them with the subsequent plaintext blocks, though here the gains are more modest, around 1.3-1.5x.

After all of this work, this is how the results now look on Illumos, even inside of a VM:
Algorithm/Mode 128k ops
AES-128/CTR: 3121 MB/s
AES-128/CBC: 691 MB/s
AES-128/GCM: 1053 MB/s

  • So the CTR and GCM speeds have actually caught up to OpenSSL, and CBC is actually faster than OpenSSL.

On the decryption side of things, CBC decryption also jumped from 627 MB/s to 3011 MB/s. Seeing these performance numbers, you can see why I chose 32k for the operation size in between kernel preemption barriers. Even on the slowest hardware with AES-NI, we can expect at least 300-400 MB/s/core of throughput, so even in the worst case, we’ll be hogging the CPU for at most ~0.1ms per run.

Overall, we’re even a little bit faster than OpenSSL in some tests, though that’s probably down to us encrypting 128k blocks vs 8k in the “openssl speed” utility. Anyway, having fixed this monstrous atrocity of a performance bug, I can now finally get some sleep.

  • To make these tests repeatable, and to ensure that the changes didn’t break the crypto algorithms, Sašo created a crypto_test kernel module.
  • I have recently created a FreeBSD version of crypto_test.ko, for much the same purposes
  • Initial performance on FreeBSD is not as bad, if you have the aesni.ko module loaded, but it is not up to speed with OpenSSL. You cannot directly compare to the benchmarks Sašo did, because the CPUs are vastly different.
  • Performance results
  • I hope to do some more tests on a range of different sized CPUs in order to determine how the algorithms scale across different clock speeds.
  • I also want to look at, or get help and have someone else look at, implementing some of the same optimizations that Sašo did.
  • It currently seems like there isn’t a way to perform additional crypto operations in the same session without regenerating the key table. Processing additional buffers in an existing session might offer a number of optimizations for bulk operations, although in many cases, each block is encrypted with a different key and/or IV, so it might not be very useful.

Brendan Gregg’s special freeware tools for sysadmins

  • These tools need to be in every (not so) serious sysadmin’s toolbox.
  • Triple ROT13 encryption algorithm (beware: export restrictions may apply)
  • /usr/bin/maybe, in case true and false provide too little choice…
  • The bottom command lists all the processes using the least CPU cycles.
  • Check out the rest of the tools.
  • You wrote similar tools and want us to cover them in the show? Send us an email to feedback@bsdnow.tv

A look at 2038

I remember the Y2K problem quite vividly. The world was going crazy for years, paying insane amounts of money to experts to fix critical legacy systems, and there was a neverending stream of predictions from the media on how it’s all going to fail. Most didn’t even understand what the problem was, and I remember one magazine writing something like the following:
Most systems store the current year as a two-digit value to save space. When the value rolls over on New Year’s Eve 1999, those two digits will be “00”, and “00” means “halt operation” in the machine language of many central processing units. If you’re in an elevator at this time, it will stop working and you may fall to your death.
I still don’t know why they thought a computer would suddenly interpret data as code, but people believed them. We could see a nearby hydropower plant from my parents’ house, and we expected it to go up in flames as soon as the clock passed midnight, while at least two airplanes crashed in our garden at the same time. Then nothing happened. I think one of the most “severe” problems was the police not being able to open their car garages the next day because their RFID tokens had both a start and end date for validity, and the system clock had actually rolled over to 1900, so the tokens were “not yet valid”.
That was 17 years ago. One of the reasons why Y2K wasn’t as bad as it could have been is that many systems had never used the “two-digit-year” representation internally, but use some form of “timestamp” relative to a fixed date (the “epoch”).
The actual problem with time and dates rolling over is that systems calculate timestamp differences all day. Since a timestamp derived from the system clock seemingly only increases with each query, it is very common to just calculate diff = now – before and never care about the fact that now could suddenly be lower than before because the system clock has rolled over. In this case diff is suddenly negative, and if other parts of the code make further use of the suddenly negative value, things can go horribly wrong.
A good example was a bug in the generator control units (GCUs) aboard Boeing 787 “Dreamliner” aircraft, discovered in 2015. An internal timestamp counter would overflow roughly 248 days after the system had been powered on, triggering a shutdown to “safe mode”. The aircraft has four generator units, but if all were powered up at the same time, they would all fail at the same time. This sounds like an overflow caused by a signed 32-bit counter counting the number of centiseconds since boot, overflowing after 248.55 days, and luckily no airline had been using their Boeing 787 models for such a long time between maintenance intervals.
The “obvious” solution is to simply switch to 64-Bit values and call it a day, which would push overflow dates far into the future (as long as you don’t do it like the IBM S/370 mentioned before). But as we’ve learned from the Y2K problem, you have to assume that computer systems, computer software and stored data (which often contains timestamps in some form) will stay with us for much longer than we might think. The years 2036 and 2038 might be far in the future, but we have to assume that many of the things we make and sell today are going to be used and supported for more than just 19 years. Also many systems have to store dates which are far in the future. A 30 year mortgage taken out in 2008 could have already triggered the bug, and for some banks it supposedly did.
sys_gettimeofday() is one of the most used system calls on a generic Linux system and returns the current time in the form of a UNIX timestamp (time_t data type) plus fraction (suseconds_t data type). Many applications have to know the current time and date to do things, e.g. displaying it, using it in game timing loops, invalidating caches after their lifetime ends, performing an action after a specific moment has passed, etc. In a 32-Bit UNIX system, time_t is usually defined as a signed 32-Bit Integer.
When kernel, libraries and applications are compiled, the compiler will turn this assumption into machine code and all components later have to match each other. So a 32-Bit Linux application or library still expects the kernel to return a 32-Bit value even if the kernel is running on a 64-Bit architecture and has 32-Bit compatibility. The same holds true for applications calling into libraries. This is a major problem, because there will be a lot of legacy software running in 2038. Systems which used an unsigned 32-Bit Integer for time_t push the problem back to 2106, but I don’t know about many of those.
The developers of the GNU C library (glibc), the default standard C library for many GNU/Linux systems, have come up with a design for year 2038 proofness for their library. Besides the time_t data type itself, a number of other data structures have fields based on time_t or the combined struct timespec and struct timeval types. Many methods besides those intended for setting and querying the current time use timestamps.
32-Bit Windows applications, or Windows applications defining _USE_32BIT_TIME_T, can be hit by the year 2038 problem too if they use the time_t data type. The __time64_t data type had been available since Visual C 7.1, but only Visual C 8 (default with Visual Studio 2005) expanded time_t to 64 bits by default. The change will only be effective after a recompilation; legacy applications will continue to be affected.
If you live in a 64-Bit world and use a 64-Bit kernel with 64-Bit only applications, you might think you can just ignore the problem. In such a constellation all instances of the standard time_t data type for system calls, libraries and applications are signed 64-Bit Integers which will overflow in around 292 billion years. But many data formats, file systems and network protocols still specify 32-Bit time fields, and you might have to read/write this data or talk to legacy systems after 2038. So solving the problem on your side alone is not enough.


Beastie Bits

Feedback/Questions



  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post Fast & the Firewall: Tokyo Drift | BSD Now 186 first appeared on Jupiter Broadcasting.

]]>
Exit Interview | BSD Now 185 https://original.jupiterbroadcasting.net/107556/exit-interview-bsd-now-185/ Thu, 16 Mar 2017 00:36:35 +0000 https://original.jupiterbroadcasting.net/?p=107556 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Interview – Kris Moore – kris@trueos.org / @pcbsdKris TrueOS founder, FreeNAS developer, BSD Now co-host Benedict Reuschling – bcr@freebsd.org / @bsdbcr FreeBSD commiter & FreeBSD […]

The post Exit Interview | BSD Now 185 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –


Interview – Kris Moore – kris@trueos.org / @pcbsdKris

  • TrueOS founder, FreeNAS developer, BSD Now co-host

Benedict Reuschling – bcr@freebsd.org / @bsdbcr

  • FreeBSD committer & FreeBSD Foundation Vice President, BSD Now co-host

The post Exit Interview | BSD Now 185 first appeared on Jupiter Broadcasting.

]]>
Tokyo Dreaming | BSD Now 184 https://original.jupiterbroadcasting.net/107406/tokyo-dreaming-bsd-now-184/ Wed, 08 Mar 2017 03:46:21 +0000 https://original.jupiterbroadcasting.net/?p=107406 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines OpenBSD A2k17 hackathon reports a2k17 hackathon report: Patrick Wildt on the arm64 port a2k17 hackathon report: Antoine Jacoutot on syspatch, rc.d improvements and more […]

The post Tokyo Dreaming | BSD Now 184 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –

Headlines

OpenBSD A2k17 hackathon reports


NetBSD is now reproducible

  • Christos Zoulas posts to the NetBSD blog that he has completed his project to make fully reproducible NetBSD builds for amd64 and sparc64

I have been working on and off for almost a year trying to get reproducible builds (the same source tree always builds an identical cdrom) on NetBSD. I did not think at the time it would take as long or be so difficult, so I did not keep a log of all the changes I needed to make. I was also not the only one working on this. Other NetBSD developers have been making improvements for the past 6 years. I would like to acknowledge the NetBSD build system (aka build.sh) which is a fully portable cross-build system. This build system has given us a head-start in the reproducible builds work.

I would also like to acknowledge the work done by the Debian folks who have provided a platform to run, test and analyze reproducible builds. Special mention to the diffoscope tool that gives an excellent overview of what’s different between binary files, by finding out what they are (and if they are containers what they contain) and then running the appropriate formatter and diff program to show what’s different for each file.

Finally other developers who have started, motivated and did a lot of work getting us here like Joerg Sonnenberger and Thomas Klausner for their work on reproducible builds, and Todd Vierling and Luke Mewburn for their work on build.sh.

  • Some of the stumbling blocks that were overcome:
    • Timestamps
    • Date/time/author embedded in source files
    • Timezone sensitive code
    • Directory order / build order
    • Non-sanitized data stored in files
    • Symbolic links / paths
    • General tool inconsistencies: including gcc profiling, the fact that GPT partition tables are, by definition, globally unique each time they are created, and that the iso9660 standard calls for a timestamp with a timezone.
    • Toolchain
    • Build information / tunables / environment. NetBSD now has a knob ‘MKREPRO’; if set to YES, it sets a long list of variables to a consistent set of values.
  • The post walks through how these problems were solved
  • Future Work:
    • Vary more parameters and find more inconsistencies
    • Verify that cross-building is reproducible
    • Verify that unprivileged builds are reproducible
    • Test on other platforms
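For anyone who wants to try it, a reproducible build boils down to setting the MKREPRO knob when invoking build.sh (a sketch; the source path and the MKREPRO_TIMESTAMP variable are assumptions based on the post — check the NetBSD documentation for current usage):

```sh
cd /usr/src
# -U: unprivileged build; -m: target machine. MKREPRO pins timestamps,
# paths and other build metadata to consistent values.
./build.sh -U -m amd64 -V MKREPRO=yes -V MKREPRO_TIMESTAMP=1488000000 release
```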

Features are faults redux

  • From Ted Unangst

Last week I gave a talk for the security class at Notre Dame based on features are faults but with some various commentary added. It was an exciting trip, with the opportunity to meet and talk with the computer vision group as well. Some other highlights include the Indiana skillet I had for breakfast, which came with pickles and was amazing, and explaining the many wonders of cvs to the Linux users group over lunch. After that came the talk, which went a little something like this.

I got started with OpenBSD back about the same time I started college, although I had a slightly different perspective then. I was using OpenBSD because it included so many security features, therefore it must be the most secure system, right? For example, at some point I acquired a second computer. What’s the first thing anybody does when they get a second computer? That’s right, set up a kerberos domain. The idea that more is better was everywhere. This was also around the time that ipsec was getting its final touches, and everybody knew ipsec was going to be the most secure protocol ever because it had more options than any other secure transport. We’ll revisit this in a bit.

There’s been a partial attitude adjustment since then, with more people recognizing that layering complexity doesn’t result in more security. It’s not an additive process. There’s a whole talk there, about the perfect security that people can’t or won’t use. OpenBSD has definitely switched directions, including less code, not more. All the kerberos code was deleted a few years ago.

Let’s assume about one bug per 100 lines of code. That’s probably on the low end. Now say your operating system has 100 million lines of code. If I’ve done the math correctly, that’s literally a million bugs. So that’s one reason to avoid adding features. But that’s a solvable problem. If we pick the right language and the right compiler and the right tooling and with enough eyeballs and effort, we can fix all the bugs. We know how to build mostly correct software, we just don’t care.

As we add features to software, increasing its complexity, new unexpected behaviors start to emerge. What are the bounds? How many features can you add before craziness is inevitable? We can make some guesses. Less than a thousand for sure. Probably less than a hundred? Ten maybe? I’ll argue the answer is quite possibly two. An interesting corollary is that it’s impossible to have a program with exactly two features. Any program with two features has at least a third, but you don’t know what it is.

My first example is a bug in the NetBSD ftp client. We had one feature, we added a second feature, and just like that we got a third misfeature.

Our story begins long ago. The origins of this bug are probably older than I am. In the dark times before the web, FTP sites used to be a pretty popular way of publishing files. You run an ftp client, connect to a remote site, and then you can browse the remote server somewhat like a local filesystem. List files, change directories, get files. Typically there would be a README file telling you what’s what, but you don’t need to download a copy to keep. Instead we can pipe the output to a program like more. Right there in the ftp client. No need to disconnect.

Fast forward a few decades, and http is the new protocol of choice. http is a much less interactive protocol, but the ftp client has some handy features for batch downloads like progress bars, etc. So let’s add http support to ftp. This works pretty well. Lots of code reused.

http has one quirk however that ftp doesn’t have. Redirects. The server can redirect the client to a different file. So now you’re thinking, what happens if I download https://somefile and the server sends back 302 https://|reboot. ftp reconnects to the server, gets the 200, starts downloading and saves it to a file called |reboot. Except it doesn’t. The function that saves files looks at the first character of the name and if it’s a pipe, runs that command instead. And now you just rebooted your computer. Or worse.

It’s pretty obvious this is not the desired behavior, but where exactly did things go wrong? Arguably, all the pieces were working according to spec. In order to see this bug coming, you needed to know how the save function worked, you needed to know about redirects, and you needed to put all the implications together.
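The core of the bug can be sketched in a few lines of Python (a hypothetical reconstruction, not the actual NetBSD C code): the save routine quietly doubles as a command runner, and the redirect feature hands it an attacker-chosen name.

```python
import subprocess

def save_output(filename: str, data: bytes) -> None:
    # Feature one: a filename starting with "|" pipes the download
    # into a command (handy for "get README |more").
    if filename.startswith("|"):
        subprocess.run(filename[1:], shell=True, input=data)
    # Feature two (the normal case): save the download to a file.
    else:
        with open(filename, "wb") as f:
            f.write(data)

# Feature three, the emergent misfeature: an HTTP redirect to "|reboot"
# reaches this function as filename="|reboot" and executes it.
```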

  • The post then goes into a lot more detail about other issues. We just don’t have time to cover it all today, but you should go read it, it is very enlightening

What do we do about this? That’s a tough question. It’s much easier to poke fun at all the people who got things wrong. But we can try. My attitudes are shaped by experiences with the OpenBSD project, and I think we are doing a decent job of containing the complexity. Keep paring away at dependencies and reducing interactions. As a developer, saying “no” to all feature requests is actually very productive. It’s so much faster than implementing the feature. Sometimes users complain, but I’ve often received later feedback from users that they’d come to appreciate the simplicity.

There was a question about which of these vulnerabilities were found by researchers, as opposed to troublemakers. The answer was most, if not all of them, but it made me realize one additional point I hadn’t mentioned. Unlike the prototypical buffer overflow vulnerability, exploiting features is very reliable. Exploiting something like shellshock or imagetragick requires no customized assembly and is independent of CPU, OS, version, stack alignment, malloc implementation, etc. Within about 24 hours of the initial release of shellshock, I had logs of people trying to exploit it. So unless you’re on about a 12 hour patch cycle, you’re going to have a bad time.


reimplement zfsctl (.zfs) support

  • avg@ (Andriy Gapon) has rewritten the .zfs support in FreeBSD

The current code is written on top of GFS, a library with the generic support for writing filesystems, which was ported from Illumos. Because of significant differences between illumos VFS and FreeBSD VFS models, both the GFS and zfsctl code were heavily modified to work on FreeBSD. Nonetheless, they still contain quite a few ugly hacks and bugs.

This is a reimplementation of the zfsctl code where the VFS-specific bits are written from scratch and only the code that interacts with the rest of ZFS is reused.

Some ideas are picked from an independent work by Will (wca@)

  • This work improves the overall quality of the ZFS port to FreeBSD

The code that provides support for ZFS .zfs/ directory functionality has been reimplemented. It is no longer possible to create a snapshot by mkdir under .zfs/snapshot/. That should be the only user visible change.

  • TIL: On illumos, you can create, rename, and destroy snapshots by manipulating the virtual directories in the .zfs/snapshot directory.

  • If enough people would find this feature useful, maybe it could be implemented (rm and rename have never existed on FreeBSD). At the same time, it seems like rather a lot of work, when the ZFS command line tools work so well. Although wca@ pointed out on IRC, it can be useful to be able to create a snapshot over NFS, or SMB.
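On illumos the virtual-directory manipulation looks like this (a sketch assuming a pool named tank; on FreeBSD only browsing .zfs/snapshot works):

```sh
# Create a snapshot -- equivalent to: zfs snapshot tank/data@before-upgrade
mkdir /tank/data/.zfs/snapshot/before-upgrade

# Rename and destroy, which FreeBSD's reimplementation does not support:
mv    /tank/data/.zfs/snapshot/before-upgrade /tank/data/.zfs/snapshot/old
rmdir /tank/data/.zfs/snapshot/old
```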


Interview – Konrad Witaszczyk – def@freebsd.org

  • Encrypted Kernel Crash Dumps

News Roundup

PBKDF2 Performance improvements on FreeBSD

  • Joe Pixton did some research and found that, because of the way the spec is written, most PBKDF2 implementations are 2x slower than they need to be.
  • Since PBKDF2 is used to derive an encryption key, this poses a problem. The attacker can derive a key twice as fast as you can. On FreeBSD, PBKDF2 was configured to derive a SHA512-HMAC key that would take approximately 2 seconds to calculate. That is 2 seconds on one core. So an attacker can calculate the same key in 1 second, and use many cores.
  • Luckily, 1 second is still a long time for each brute force guess. On modern CPUs with the fast algorithm, you can do about 500,000 iterations of PBKDF per second (per core).
  • Until a recent change, OpenBSD used only 8192 iterations. It now uses a similar benchmark of ~2 seconds, and uses bcrypt instead of a SHA1-HMAC.
  • Joe’s research showed that the majority of implementations were done the ‘slow’ way. Calculating the initial part of the outer round each iteration, instead of reusing the initial calculation over and over for each round.
  • Joe submitted a patch to FreeBSD to solve this problem. That patch was improved, and a set of tests was added, by jmg@, but then work stalled
  • I picked up the work, and fixed some merge conflicts in the patch that had cropped up based on work I had done that moved the HMAC code to a separate file.
  • This work is now committed.

With this change, all newly generated GELI keys will be approximately 2x as strong. Previously generated keys will take half as long to calculate, resulting in faster mounting of encrypted volumes. Users may choose to rekey, to generate a new key with the larger default number of iterations using the geli(8) setkey command. Security of existing data is not compromised, as ~1 second per brute force attempt is still a very high threshold.

  • If you are interested in the topic, I recommend the video of Joe’s presentation from the Passwords15 conference in Las Vegas
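The “fast” approach is easy to see in Python (an illustrative sketch of the optimization, not FreeBSD’s C implementation): key the HMAC once, then clone that state every round instead of re-deriving it from the password.

```python
import hashlib, hmac, struct

def pbkdf2_sha512_block(password: bytes, salt: bytes, iterations: int,
                        block: int = 1) -> bytes:
    # Key the HMAC once; the expensive key setup happens only here.
    keyed = hmac.new(password, digestmod=hashlib.sha512)

    # U1 = HMAC(P, salt || INT(block))
    m = keyed.copy()
    m.update(salt + struct.pack(">I", block))
    u = m.digest()

    result = bytearray(u)
    for _ in range(iterations - 1):
        m = keyed.copy()   # reuse the keyed state (the "fast" way);
        m.update(u)        # naive code re-keys from the password here
        u = m.digest()
        for i, b in enumerate(u):
            result[i] ^= b
    return bytes(result)
```

For a single output block this matches the standard library’s PBKDF2, while doing the HMAC key schedule once instead of on every iteration.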

Quick How-To: Updating a screenshot in the TrueOS Handbook

  • Docs writers, might be time to pay attention. This week we have a good walk-through of adding / updating new screenshots to the TrueOS Sphinx Documentation.
  • For those who have not looked in the past, TrueOS and FreeNAS both have fantastic docs by the team over at iXsystems using Sphinx as their doc engine.
  • Often we get questions from users asking what “they can do to help” but don’t necessarily have programming skills to apply.
  • The good news is that using Sphinx is relatively easy, and after learning some minimal rst syntax you can easily help fix, or even contribute new sections to, the TrueOS (or FreeNAS) documentation.
  • In this example, Tim takes us through the process of replacing an old out of date screenshot in the handbook with the latest hotness.
  • Starting with a .png file, he locates the old screenshot “lumina-e.png” and adds the updated version “lumina-f.png” to the tree. With the file added, the relevant section of .rst code can be adjusted and the Sphinx build run to verify the output HTML looks correct.
  • Using this method you can easily start to get involved with other aspects of documentation and next thing you know you’ll be writing boot-loaders like Allan!
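The rst change itself is typically a one-line edit to a figure or image directive (file name and caption here are illustrative, not taken from the handbook):

```rst
.. figure:: images/lumina-f.png
   :align: center

   The updated Lumina configuration screen
```

After saving, rerunning the project’s Sphinx build (commonly `make html`) regenerates the handbook for review.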

Learn C Programming With 9 Excellent Open Source Books

  • Now that you’ve easily mastered all your documentation skills, you may be ready to take on a new challenge. (Come on, that boot-loader isn’t going to write itself!)
  • We wanted to point out some excellent resources to get you started on your journey into writing C.
  • Before you think, “oh, more books to purchase”, wait there’s good news. These are the top-9 open-source books that you can download in digital form free of charge. Now I bet we got your attention.
  • We start the rundown with “The C Book”, by Mike Banahan, Declan Brady and Mark Doran, which will lay the groundwork with your introduction into the C language and concepts.
  • Next up, if you are going to do anything, do it with style, so take a read through the “C Elements of Style” which will make you popular at all the parties. (We can’t vouch for that statement)
  • From here we have a book on using C to build your own minimal “lisp” interpreter, reference guides on GNU C and some other excellent introduction / mastery books to help round-out your programming skill set.
  • Your C adventure awaits, hopefully these books can not only teach you good C, but also make you feel confident when looking at bits of the FreeBSD world or kernel with a proper foundation to back it up.

Running a Linux VM on OpenBSD

  • Over the past few years we’ve talked a lot about Virtualization, Bhyve or OpenBSD’s ‘vmm’, but qemu hasn’t gotten much attention.
  • Today we have a blog post with details on how to deploy qemu to run Linux on top of an OpenBSD host system.
  • The post starts by showing us how to first provision the storage for qemu using the handy ‘qemu-img’ command, which in this example creates only a 4GB disk; you’ll probably want more for real-world usage.
  • Next up the qemu command will be run, pay attention to the particular flags for network and memory setup. You’ll probably want to bump it up past the recommended 256M of memory.
  • Networking is always the fun part, as the author describes his intended setup

I want OpenBSD and Debian to be able to obtain an IP via DHCP on their wired interfaces and I don’t want external networking required for an NFS share to the VM. To accomplish this I need two interfaces since dhclient will erase any other IPv4 addresses already assigned. We can’t assign an address directly to the bridge, but we can configure a virtual Ethernet device and add it.

  • The setup for this portion involves touching a few more files, but isn’t all that painful. Some “pf” rules to enable NAT, and a dhcpd setup to assign a “fixed” IP to the VM, will get us going, along with some additional details on how to configure the networking inside the Debian VM.
  • Once those steps are completed you should be able to mount NFS and share data from the host to the VM painlessly.
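The provisioning and boot steps look roughly like this (a sketch; the disk size, memory, and ISO name are illustrative rather than the post’s exact flags):

```sh
# Create the backing disk image for the VM.
qemu-img create -f qcow2 debian.qcow2 4G

# Boot the Debian installer; bump -m well past the recommended 256M.
qemu-system-x86_64 -m 1024 \
    -drive file=debian.qcow2,if=virtio \
    -cdrom debian-netinst.iso
```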

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post Tokyo Dreaming | BSD Now 184 first appeared on Jupiter Broadcasting.

]]>
Getting Steamy Here | BSD Now 183 https://original.jupiterbroadcasting.net/107231/getting-steamy-here-bsd-now-183/ Tue, 28 Feb 2017 22:24:00 +0000 https://original.jupiterbroadcasting.net/?p=107231 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines playonbsd with TrueOS: It’s Getting Steamy in Here and I’ve Had Too Much Wine We’ve done a couple of tutorials in the past on […]

The post Getting Steamy Here | BSD Now 183 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –

Headlines

playonbsd with TrueOS: It’s Getting Steamy in Here and I’ve Had Too Much Wine

We’ve done a couple of tutorials in the past on using Steam and Wine with PC-BSD, but now with the addition of playonbsd to the AppCafe library, you have more options than ever before to game on your TrueOS system. We’re going to have a look today at playonbsd, how it works with TrueOS, and what you can expect if you want to give it a try on your own system. Let’s dive right in!

Once playonbsd is installed, go back to your blank desktop, right-click on the wallpaper, and select terminal. Playonbsd does almost all the configuring for you, but there are still a couple of simple options you’ll want to configure to give yourself the best experience. In your open terminal, type: playonbsd. You can also find playonbsd by doing a fast search using Lumina’s built-in search function in the start menu after it’s been installed. Once opened, a graphical interface greets us with easy to navigate menus and even does most of the work for you.

  • A nice graphical UI that hides the complexity of setting up WINE and Steam, and lets you select the game you want and get it set up
  • Start gaming quicker, without the headache

If you’re a PC gamer, you should definitely give playonbsd a try! You may be surprised at how well it works. If you want to know ahead of time if your games are well supported or not, head on over to WineHQ and do a search. Many people have tested and provided feedback and even solutions for potential problems with a large variety of video games. This is a great resource if you run into a glitch or other problem.


Weird Unix thing: ‘cd //’

  • So why can you do ‘cd //tmp’, and it isn’t the same as ‘cd /tmp’?
  • The spec says:

An implementation may further simplify curpath by removing any trailing slash characters that are not also leading slash characters, replacing multiple non-leading consecutive slash characters with a single slash, and replacing three or more leading slash characters with a single slash. If, as a result of this canonicalization, the curpath variable is null, no further steps shall be taken.

  • “So! We can replace “three or more leading / characters with a single slash”. That does not say anything about what to do when there are 2 / characters though, which presumably is why cd //tmp leaves you at //tmp.”

A pathname that begins with two successive slashes may be interpreted in an implementation-defined manner

  • So what is it for? Well, the blog did a bit of digging and came up with this stackoverflow answer
  • In cygwin and some other systems, // is treated as a unix-ified version of \\, to access UNC Windows file sharing paths like \\server\share
  • Perforce, the vcs, uses // to denote a path relative to the depot
  • It seems to have been used in the path for a bunch of different network file systems, but also for myriad other things
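Python’s posixpath module follows the same POSIX rule, which makes the distinction easy to see:

```python
import posixpath

# Exactly two leading slashes are implementation-defined and preserved;
# three or more collapse to one, as do non-leading runs of slashes.
print(posixpath.normpath("//tmp"))    # //tmp
print(posixpath.normpath("///tmp"))   # /tmp
print(posixpath.normpath("/a//b/"))   # /a/b
```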

Testing out snapshots in Apple’s next-generation APFS file system

  • Adam Leventhal takes his DTrace hammer to Apple’s new file system to see what is going on

Back in June, Apple announced its new upcoming file system: APFS, or Apple File System. There was no mention of it in the WWDC keynote, but devotees needed no encouragement. They picked over every scintilla of data from the documentation on Apple’s developer site, extrapolating, interpolating, eager for whatever was about to come. In the WWDC session hall, the crowd buzzed with a nervous energy, eager for the grand unveiling of APFS. I myself badge-swapped my way into the conference just to get that first glimpse of Apple’s first original filesystem in the 30+ years since HFS

Apple’s presentation didn’t disappoint the hungry crowd. We hoped for a modern filesystem, optimized for next generation hardware, rich with features that have become the norm for data centers and professionals. With APFS, Apple showed a path to meeting those expectations. Dominic Giampaolo and Eric Tamura, leaders of the APFS team, shared performance optimizations, data integrity design, volume management, efficient storage of copied data, and snapshots—arguably the feature of APFS most directly in the user’s control.

It’s 2017, and Apple already appears to be making good on its promise with the revelation that the forthcoming iOS 10.3 will use APFS. The number of APFS tinkerers using it for their personal data has instantly gone from a few hundred to a few million. Beta users of iOS 10.3 have already made the switch apparently without incident. They have even ascribed unscientifically-significant performance improvements to APFS.

  • Previously Adam had used DTrace to find a new syscall introduced in OS X, fs_snapshot, but he had not dug into how to use it. Now it seems, the time has come

Learning from XNU and making some educated guesses, I wrote my first C program to create an APFS snapshot. This section has a bit of code, which you can find in this Github repo

  • That just returned “fs_snapshot: Operation not permitted”
  • So, being Adam, he used DTrace to figure out what the problem was

Running this DTrace script in one terminal while running the snapshot program in another shows the code flow through the kernel as the program executes

In the code flow, the priv_check_cred() function jumps out as a good place to continue because of its name, the fact that fs_snapshot calls it directly, and the fact that it returns 1 which corresponds with EPERM, the error we were getting.

  • Turns out, it just requires some sudo

With a little more testing I wrote my own version of Apple’s unreleased snapUtil command from the WWDC demo

We figured out the proper use of the fs_snapshot system call and reconstructed the WWDC snapUtil. But all this time an equivalent utility has been lurking on macOS Sierra. If you look in /System/Library/Filesystems/apfs.fs/Contents/Resources/, Apple has included a number of APFS-related utilities, including apfs_snapshot (and, tantalizingly, a tool called hfs_convert).

Snapshots let you preserve state to later peruse; we can also revert an APFS volume to a previous state to restore its contents. The current APFS semantics around rollback are a little odd. The revert operation succeeds, but it doesn’t take effect until the APFS volume is next mounted

Another reason Apple may not have wanted people messing around with snapshots is that the feature appears to be incomplete. Winding yourself into a state where only a reboot can clear a mounted snapshot is easy, and using snapshots seems to break some of the diskutil APFS output

  • It is interesting to see what you can do with DTrace, as well as what a DTrace and ZFS developer thinks of APFS

Interview – Tom Jones – tj@enoti.me

  • Replacing the BSD Sockets API

News Roundup

FreeBSD rc.d script to map ethernet device names by MAC address

Self-contained FreeBSD rc.d script for renaming devices based on their MAC address. I needed it due to USB Ethernet devices coming up in different orders across OS upgrades.

  • Copy ethname into /usr/local/etc/rc.d/
  • Add the following to rc.conf:

    ethname_enable="YES"
    ethname_devices="em0 ue0 ue1" # Replace with desired devices to rename

  • Create /usr/local/etc/ifmap in the following format:

    01:23:45:67:89:ab eth0
    01:23:45:67:89:ac eth1

That’s it. Use ifconfig_<name>="" settings in rc.conf with the new names.

  • I know MFSBSD has something like this, but a polished up hybrid of the two should likely be part of the base system if something is not already available
  • This would be a great “Junior Job”, if say, a viewer wanted to get started with their first FreeBSD patch

Mog: A different take on the Unix tool cat

  • Do you abuse cat to view files?
  • Did you know cat is meant for concatenating files, meaning: cat part1 part2 part3 > wholething.txt
  • mog is a tool for actually viewing files, and it adds quite a few nice features
    • Syntax highlight scripts
    • Print a hex dump of binary files
    • Show details of image files
    • Perform objdump on executables
    • List a directory

mog reads the $HOME/.mogrc config file which describes a series of operations it can do in an ordered manner. Each operation has a match command and an action command. For each file you give to mog it will test each match command in turn, when one matches it will perform the action. A reasonably useful config file is generated when you first run it.


How Unix erases things when you type a backspace while entering text

Yesterday I mentioned in passing that printing a DEL character doesn’t actually erase anything. This raises an interesting question, because when you’re typing something into a Unix system and hit your backspace key, Unix sure erases the last character that you entered. So how is it doing that?

The answer turns out to be basically what you’d expect, although the actual implementation rapidly gets complex. When you hit backspace, the kernel tty line discipline rubs out your previous character by printing (in the simple case) Ctrl-H, a space, and then another Ctrl-H.
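The rub-out sequence is simple enough to show directly (a toy Python illustration; the real work happens in the kernel’s tty code):

```python
import sys

def rubout(s: str, n: int = 1) -> str:
    # Echo a string, then erase its last n characters the way the tty
    # line discipline does: Ctrl-H (\b), space, Ctrl-H for each one.
    return s + "\b \b" * n

# On a terminal this displays just "hello": the X is overwritten in place.
sys.stdout.write(rubout("helloX"))
```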

Of course just backing up one character is not always the correct way of erasing input, and that’s when it gets complicated for the kernel. To start with we have tabs, because when you (the user) backspace over a tab you want the cursor to jump all the way back, not just move back one space. The kernel has a certain amount of code to work out what column it thinks you’re on and then back up an appropriate number of spaces with Ctrl-Hs.

Then we have the case when you quoted a control character while entering it, eg by typing Ctrl-V Ctrl-H; this causes the kernel to print the Ctrl-H instead of acting on it, and it prints it as the two character sequence ^H. When you hit backspace to erase that, of course you want both (printed) characters to be rubbed out, not just the ‘H’. So the kernel needs to keep track of that and rub out two characters instead of just one.

  • Chris then provides an example, from IllumOS, of the kernel trying to deal with multibyte characters

FreeBSD also handles backspacing a space specially, because you don’t need to actually rub that out with a ‘\b \b’ sequence; you can just print a plain \b. Other kernels don’t seem to bother with this optimization. The FreeBSD code for this is in sys/kern/tty_ttydisc.c in the ttydisc_rubchar function

PS: If you want to see the kernel’s handling of backspace in action, you usually can’t test it at your shell prompt, because you’re almost certainly using a shell that supports command line editing and readline and so on. Command line editing requires taking over input processing from the kernel, and so such shells are handling everything themselves. My usual way to see what the kernel is doing is to run ‘cat >/dev/null’ and then type away.

  • And you thought the backspace key would be simple…

FreeBSD ports now have Wayland

  • We’ve discussed the pending Wayland work, but we wanted to point you today to the ports which are in mainline FreeBSD ports tree now.
  • First of all, (And I was wondering how they would deal with this) it has landed in the “graphics” category, since Wayland is the Anti-X11, putting it in x11/ didn’t make a lot of sense.
  • Couple of notes before you start installing new packages and expecting wayland to “just work”
  • First, this does require that you have working DRM from the kernel side. You’ll want to grab TrueOS or build from Matt Macy’s FreeBSD branches on GitHub before testing on any kind of modern Intel GPU. Nvidia with modesetting should be supported.
  • Next, not all desktops will “just work”. You may need to grab the experimental Weston compositor. KDE / GNOME (and Lumina) and friends will grow Wayland support in the future, so don’t expect to just fire up $whatever and have it all work out of the box.
  • Feedback is needed! This is brand new functionality for FreeBSD, and the maintainers will want to hear your results. For us on the TrueOS side we are interested as well, since we want to port Lumina over to Wayland soon(ish)
  • Happy Experimenting!

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post Getting Steamy Here | BSD Now 183 first appeared on Jupiter Broadcasting.

]]>
Bloaty McBloatface | BSD Now 182 https://original.jupiterbroadcasting.net/107061/bloaty-mcbloatface-bsd-now-182/ Wed, 22 Feb 2017 21:49:52 +0000 https://original.jupiterbroadcasting.net/?p=107061 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines OpenBSD changes of note 6 OpenBSD can now be cross built with clang. Work on this continues Build ld.so with -fno-builtin because otherwise clang […]

The post Bloaty McBloatface | BSD Now 182 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –

Headlines

OpenBSD changes of note 6

  • OpenBSD can now be cross built with clang. Work on this continues

Build ld.so with -fno-builtin because otherwise clang would optimize the local versions of functions like _dl_memset into a call to memset, which doesn’t exist.
Add connection timeout for ftp (http). Mostly for the installer so it can error out and try something else.
Complete https support for the installer.

  • I wonder how they handle certificate verification. I need to look into this as I’d like to switch the FreeBSD installer to this as well

New ocspcheck utility to validate a certificate against its ocsp responder.
net lock here, net lock there, net lock not quite everywhere but more than before.
More per cpu counters in networking code as well.
Disable and lock Silicon Debug feature on modern Intel CPUs.
Prevent wireless frame injection attack described at 33C3 in the talk titled “Predicting and Abusing WPA2/802.11 Group Keys” by Mathy Vanhoef.
Add support for multiple transmit ifqueues per network interface. Supported drivers include bge, bnx, em, myx, ix, hvn, xnf.
pledge now tracks when a file was opened and uses this to permit or deny ioctl.
Reimplement httpd’s support for byte ranges. Fixes a memory DoS.


FreeBSD 2016Q4 Status Report

  • An overview of some of the work that happened in October – December 2016
  • The ports tree saw many updates and surpassed 27,000 ports
  • The core team was busy as usual, and the foundation attended and/or sponsored a record 24 events in 2016.
  • CEPH on FreeBSD seems to be coming along nicely. For those that do not know, Ceph is a distributed filesystem that can sit on top of another filesystem; that is, you can use it to create a clustered filesystem out of a bunch of ZFS servers. We would love to have some viewers give it a try and report back.
  • OpenBSM, the FreeBSD audit framework, got some updates
  • Ed Schouten committed a front end to export sysctl data in a format usable by Prometheus, the open source monitoring system. This is useful for other monitoring software too.
  • Lots of updates for various ARM boards
  • There is an update on Reproducible Builds in FreeBSD: “It is now possible to build the FreeBSD base system (kernel and userland) completely reproducibly, although it currently requires a few non-default settings”, and the ports tree is at 80% reproducible
  • Lots of toolchain updates (gcc, lld, gdb)
  • Various updates from major ports teams

Amazon rolls out IPv6 support on EC2

A few hours ago Amazon announced that they had rolled out IPv6 support in EC2 to 15 regions — everywhere except the Beijing region, apparently. This seems as good a time as any to write about using IPv6 in EC2 on FreeBSD instances.
First, the good news: Future FreeBSD releases will support IPv6 “out of the box” on EC2. I committed changes to HEAD last week, and merged them to the stable/11 branch moments ago, to have FreeBSD automatically use whatever IPv6 addresses EC2 makes available to it.
Next, the annoying news: To get IPv6 support in EC2 from existing FreeBSD releases (10.3, 11.0) you’ll need to run a few simple commands. I consider this unfortunate but inevitable: While Amazon has been unusually helpful recently, there’s nothing they could have done to get support for their IPv6 networking configuration into FreeBSD a year before they launched it.

  • You need the dual-dhclient port:

pkg install dual-dhclient

  • And the following lines in your /etc/rc.conf:

ifconfig_DEFAULT="SYNCDHCP accept_rtadv"
ipv6_activate_all_interfaces="YES"
dhclient_program="/usr/local/sbin/dual-dhclient"

It is good to see FreeBSD ready to use this feature on day 0; that is not something we would have had in the past.

Finally, one important caveat: While EC2 is clearly the most important place to have IPv6 support, and one which many of us have been waiting a long time to get, this is not the only service where IPv6 support is important. Of particular concern to me, Application Load Balancer support for IPv6 is still missing in many regions, and Elastic Load Balancers in VPC don’t support IPv6 at all — which matters to those of us who run non-HTTP services. Make sure that IPv6 support has been rolled out for all the services you need before you start migrating.

  • Colin’s blog also has the details on how to actually activate IPv6 from the Amazon side; if only it were as easy as configuring it on the FreeBSD side

FreeBSD’s George Neville-Neil tries valiantly for over an hour to convince a Linux fan of the error of their ways

In today’s episode of the Lunduke Hour I talk to George Neville-Neil — author and FreeBSD advocate. He tries to convince me, a Linux user, that FreeBSD is better.

  • They cover quite a few topics, including:
    • licensing, and the motivations behind it
    • vendor relations
    • community
    • development model
    • drivers and hardware support
  • George also talks about his work with the FreeBSD Foundation, and the book he co-authored, “The Design and Implementation of the FreeBSD Operating System, 2nd Edition”

News Roundup

An interactive script that makes it easy to install 50+ desktop environments following a base install of FreeBSD 11

  • And I thought I was doing good when I wrote a patch for the installer that enables your choice of 3 desktop environments…

This is a collection of scripts meant to install desktop environments on unix-like operating systems following a base install. I call one of these ‘complete’ when it meets the following requirements:

  • A graphical logon manager is presented without user intervention after powering on the machine
  • Logging into that graphical logon manager takes the user into the specified desktop environment
  • The user can open a terminal emulator
  • I need to revive my patch, and add Lumina to it

Firefox 51 on sparc64 – we did not hit the wall yet

  • A NetBSD developer tells the story of getting Firefox 51 running on their sparc64 machine
  • It turns out the bug impacted amd64 as well, so it was quickly fixed
  • They are a bit less hopeful about the future, since Firefox will soon require rust to compile, and rust is not working on sparc64 yet
  • Although there has been some activity on the rust on sparc64 front, so maybe there is hope
  • The post also looks at a few alternative browsers, but is not hopeful

Introducing Bloaty McBloatface: a size profiler for binaries

I’m very excited to announce that today I’m open-sourcing a tool I’ve been working on for several months at Google. It’s called Bloaty McBloatface, and it lets you explore what’s taking up space in your .o, .a, .so, and executable binary files.
Bloaty is available under the Apache 2 license. All of the code is available on GitHub: github.com/google/bloaty. It is quick and easy to build, though it does require a somewhat recent compiler since it uses C++11 extensively. Bloaty primarily supports ELF files (Linux, BSD, etc) but there is some support for Mach-O files on OS X too. I’m interested in expanding Bloaty’s capabilities to more platforms if there is interest!

  • I need to try this on some of the boot code files, to see if there are places we can trim some fat

We’ve been using Bloaty a lot on the Protocol Buffers team at Google to evaluate the binary size impacts of our changes. If a change causes a size increase, where did it come from? What sections/symbols grew, and why? Bloaty has a diff mode for understanding changes in binary size

  • The diff mode looks especially interesting. It might be worth setting up some kind of CI testing that alerts if a change results in a significant size increase in a binary or library

A BSD licensed mdns responder

  • One of the things we just have to deal with in the modern world is service and system discovery. Many of us have fiddled with avahi or mdnsd and related “mdns” services.
  • For various reasons those often haven’t been the best-fit on BSD systems.
  • Today we have a github project to point you at, which while a bit older, has recently been updated with pledge() support for OpenBSD.
  • First of all, why do we need an alternative? They list their reasons:

This is an attempt to bring native mdns/dns-sd to OpenBSD. Mainly cause all the other options suck and proper network browsing is a nice feature these days.

Why not Apple’s mdnsd ?
1 – It sucks big time.
2 – No BSD License (Apache-2).
3 – Overcomplex API.
4 – Not OpenBSD-like.

Why not Avahi ?
1 – No BSD License (LGPL).
2 – Overcomplex API.
3 – Not OpenBSD-like
4 – DBUS and lots of dependencies.

  • Those already sound like pretty compelling reasons. What makes this “new” information again is the pledge support, and perhaps it’s time for more BSDs to consider importing something like mdnsd into their base systems to make service discovery more “automatic”

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post Bloaty McBloatface | BSD Now 182 first appeared on Jupiter Broadcasting.

]]>
The Cantrillogy | BSD Now 181 https://original.jupiterbroadcasting.net/106911/the-cantrillogy-bsd-now-181/ Wed, 15 Feb 2017 11:00:32 +0000 https://original.jupiterbroadcasting.net/?p=106911 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – FOSDEM 2017 BSD Dev Room Videos Ubuntu Slaughters Kittens | BSD Now 103 The Cantrill Strikes Back | BSD Now 117 Return of the Cantrill […]

The post The Cantrillogy | BSD Now 181 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –

  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post The Cantrillogy | BSD Now 181 first appeared on Jupiter Broadcasting.

]]>
Illuminating the desktop | BSD Now 180 https://original.jupiterbroadcasting.net/106756/illuminating-the-desktop-bsd-now-180/ Wed, 08 Feb 2017 02:01:52 +0000 https://original.jupiterbroadcasting.net/?p=106756 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Interview – Ken Moore – ken@trueos.org TrueOS, Lumina, Sys Admin, The BSD Desktop Ecosystem Send questions, comments, show ideas/topics, or stories you want mentioned on […]

The post Illuminating the desktop | BSD Now 180 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –

Interview – Ken Moore – ken@trueos.org

  • TrueOS, Lumina, Sys Admin, The BSD Desktop Ecosystem

  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv


The post Illuminating the desktop | BSD Now 180 first appeared on Jupiter Broadcasting.

]]>
The Wayland Machine | BSD Now 179 https://original.jupiterbroadcasting.net/106601/the-wayland-machine-bsd-now-179/ Thu, 02 Feb 2017 00:50:06 +0000 https://original.jupiterbroadcasting.net/?p=106601 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines Wayland is now in the FreeBSD Ports tree This commit brings Wayland, the new windowing system, into the FreeBSD ports tree “This port was […]

The post The Wayland Machine | BSD Now 179 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –

Headlines

Wayland is now in the FreeBSD Ports tree

  • This commit brings Wayland, the new windowing system, into the FreeBSD ports tree
  • “This port was first created by Koop Mast (kwm@) then updated and improved by Johannes Lundberg”
  • “Wayland is intended as a simpler replacement for X, easier to develop and maintain. GNOME and KDE are expected to be ported to it.”
  • Wayland is designed for desktop and laptop use, unlike X, which was designed for use over the network, where clients were not powerful enough to run the applications locally.
  • “Wayland is a protocol for a compositor to talk to its clients as well as a C library implementation of that protocol. The compositor can be a standalone display server running on Linux kernel modesetting and evdev input devices, an X application, or a wayland client itself. The clients can be traditional applications, X servers (rootless or fullscreen) or other display servers.”
  • “Please report bugs to the FreeBSD bugtracker!”
  • It is good to see this project progressing, as it seems in a few generations, high performance graphics drivers may only be actively developed for Wayland.

Call For Testing: xorg 1.18.4 and newer intel/ati DDX

  • Baptiste Daroussin, and the FreeBSD X11 team, have issued a call for testing for the upgrade to Xorg 1.18.4
  • Along with it comes newer ATI/AMD and Intel drivers
  • “Note that you will need to rebuild all the xf86-* packages to work with that newer xorg (hence the bump of the revision)”
  • “Do not expect newer gpu supported as this is not the kernel part”, it only provides the newer Xorg driver, not the kernel mode setting driver (this is a separate project)
  • “If you experience any issue with intel or radeon driver please try to use the new modesetting driver provided by xorg directly (note that fedora and debian recommend the use of the new driver instead of the ati/intel one)”

Error handling in C

  • “Unlike other languages which have one preferred means of signalling an error, C is a multi error paradigm language. Error handling styles in C can be organized into one of several distinct styles, such as popular or correct. Some examples of each.”
    • “One very popular option is the classic unix style. -1 is returned to indicate an error.”
    • “Another option seen in the standard C library is NULL for errors.”
    • “The latter has the advantage that NULL is a false value, which makes it easier to write logical conditions. File descriptor 0 is valid (stdin) but false, while -1 is invalid but true.”
    • “And of course, there’s the worst of both worlds approach requiring a special sentinel that you’ll probably forget to use”
    • “Other unix functions, those that don’t need to return a file descriptor, stick to just 0 and -1”
    • “Of course, none of these functions reveal anything about the nature of the error. For that, you need to consult the errno on the side”
  • The article goes on to describe different ways of dealing with the issue, and return values.
  • There is also coverage of more complex examples and involve a context that might contain the error message
  • It is really interesting to see the differences, and the pitfalls of each approach

Fixing POSIX Filenames

  • “Traditionally, Unix/Linux/POSIX pathnames and filenames can be almost any sequence of bytes. A pathname lets you select a particular file, and may include zero or more “/” characters. Each pathname component (separated by “/”) is a filename; filenames cannot contain “/”. Neither filenames nor pathnames can contain the ASCII NUL character (\0), because that is the terminator.”
  • “This lack of limitations is flexible, but it also creates a legion of unnecessary problems. In particular, this lack of limitations makes it unnecessarily difficult to write correct programs (enabling many security flaws). It also makes it impossible to consistently and accurately display filenames, causes portability problems, and confuses users.”
  • “This article will try to convince you that adding some tiny limitations on legal Unix/Linux/POSIX filenames would be an improvement. Many programs already presume these limitations, the POSIX standard already permits such limitations, and many Unix/Linux filesystems already embed such limitations — so it’d be better to make these (reasonable) assumptions true in the first place. This article will discuss, in particular, the three biggest problems: control characters in filenames (including newline, tab, and escape), leading dashes in filenames, and the lack of a standard character encoding scheme (instead of using UTF-8). These three problems impact programs written in any language on Unix/Linux/POSIX system. There are other problems, of course. Spaces in filenames can cause problems; it’s probably hopeless to ban them outright, but resolving some of the other issues will simplify handling spaces in filenames. For example, when using a Bourne shell, you can use an IFS trick (using IFS="$(printf '\n\t')") to eliminate some problems with spaces. Similarly, special metacharacters in filenames cause some problems; I suspect few if any metacharacters could be forbidden on all POSIX systems, but it’d be great if administrators could locally configure systems so that they could prevent or escape such filenames when they want to. I then discuss some other tricks that can help.”
  • “After limiting filenames slightly, creating completely-correct programs is much easier, and some vulnerabilities in existing programs disappear. This article then notes some others’ opinions; I knew that some people wouldn’t agree with me, but I’m heartened that many do agree that something should be done. Finally, I briefly discuss some methods for solving this long-term; these include forbidding creation of such names (hiding them if they already exist on the underlying filesystem), implementing escaping mechanisms, or changing how tools work so that these are no longer problems (e.g., when globbing/scanning, have the libraries prefix “./” to any filename beginning with “-”). Solving this is not easy, and I suspect that several solutions will be needed. In fact, this paper became long over time because I kept finding new problems that needed explaining (new “worms under the rocks”). If I’ve convinced you that this needs improving, I’d like your help in figuring out how to best do it!”
  • “Filename problems affect programs written in any programming language. However, they can be especially tricky to deal with when using Bourne shells (including bash and dash). If you just want to write shell programs that can handle filenames correctly, you should see the short companion article Filenames and Pathnames in Shell: How to do it correctly.”
  • Imagine that you don’t know Unix/Linux/POSIX (I presume you really do), and that you’re trying to do some simple tasks. For our purposes we will create simple scripts on the command line (using a Bourne shell) for these tasks, though many of the underlying problems affect any program. For example, let’s try to print out the contents of all files in the current directory, putting the contents into a file in the parent directory:
    • cat * > ../collection # WRONG
    • cat ./* > ../collection # CORRECT
    • cat $(find . -type f) > ../collection # WRONG
    • ( set -f ; for file in $(find . -type f) ; do # WRONG
      cat "$file"
      done ) > ../collection
    • ( find . -type f | xargs cat ) > ../collection # WRONG, WAY WRONG
  • Just think about trying to remove a file named: -rf /

News Roundup

OpenBSD ARM64

  • A new page has appeared on the OpenBSD website, offering images for ARM64
  • “The current target platforms are the Pine64 and the Raspberry Pi 3.”
  • “OpenBSD/arm64 bundles various platforms sharing the 64-bit ARM architecture. Due to the fact that there are many System on a Chips (SoC) around, OpenBSD/arm64 differentiates between various SoCs and may have a different level of support between them”
  • The page contains a list of the devices that are supported, and which components have working drivers
  • At the time of recording, the link to download the snapshots did not work yet, but by the time this airs a week from now, it should be working.

The design of Chacha20

  • Seems like every few episodes we end up discussing ciphers (with their oh-so-amusing naming), and today is no exception.
  • We have a great write-up on the design and implementation of the ChaCha20 cipher, written by Loup Vaillant
  • First of all, is this story for you? Maybe the summary will help make that call:

Quick summary: Chacha20 is an ARX-based hash function, keyed, running in counter mode. It embodies the idea that one can use a hash function to encrypt data.

  • If your eyes didn’t glaze over, then you are cleared to proceed.
  • Chacha20 is built around stream ciphers:

While Chacha20 is mainly used for encryption, its core is a pseudo-random number generator. The cipher text is obtained by XOR’ing the plain text with a pseudo-random stream:
cipher_text = plain_text XOR chacha_stream(key, nonce)

Provided you never use the same nonce with the same key twice, you can treat that stream as a one time pad. This makes it very simple: unlike block ciphers, you don’t have to worry about padding, and decryption is the same operation as encryption:
plain_text = cipher_text XOR chacha_stream(key, nonce)

Now we just have to get that stream.

  • The idea that the streams can mimic the concept of a one-time pad does make chacha20 very attractive, even to a non-crypto guy such as myself.
  • From here the article goes into depth on how the cipher scrambles 512-bit blocks using the quarter-round method (a quarter of a block, or four 32-bit numbers)
  • Some ASCII art is used here to help visualize how this is done in the quarter-round phase, and then for the complete block as the four quarter rounds are run in parallel over the entire 512-bit block.
  • From here the article goes more into depth, looking at the complete ChaCha block and the importance of a seemingly unnecessary constant (hint: it’s really important)
  • If crypto is something you find fascinating, you’ll want to make sure you give this one a full read-through.

CyberChef – Coming to a FreeBSD Ports tree near you

  • Dan Langille tweets that he will be creating a port of GCHQ’s CyberChef tool
  • “CyberChef is a simple, intuitive web app for carrying out all manner of “cyber” operations within a web browser. These operations include creating hexdumps, simple encoding like XOR or Base64, more complex encryption like AES, DES and Blowfish, data compression and decompression, calculating hashes and checksums, IPv6 and X.509 parsing, and much more.”
  • “The tool is designed to enable both technical and non-technical analysts to manipulate data in complex ways without having to deal with complex tools or algorithms. It was conceived, designed, built and incrementally improved by an analyst in their 10% innovation time over several years. Every effort has been made to structure the code in a readable and extendable format, however it should be noted that the analyst is not a professional developer and the code has not been peer-reviewed for compliance with a formal specification.”
  • Some handy functions, beyond stuff like base64 encoding:
  • Network Enumeration (CIDR to list of IPs)
  • Browser User Agent Parser (what browser is that, based on your HTTP logs)
  • XOR Brute Force: enter some XOR’d text, and try every possible key to find plaintext. Optionally give it a regex of known plaintext to find the right key.
  • Calculate the “Shannon Entropy” of the input (how random is this data)
  • It also has a number of built in regular expressions for common things, very useful
  • The project is up on github if you want to play with the code

Building Electron and VSCode in FreeBSD11

  • A patch and set of instructions for building Electron and VSCode on FreeBSD
  • “Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux and macOS. It includes support for debugging, embedded Git control, syntax highlighting, intelligent code completion, snippets, and code refactoring. It is also customizable, so users can change the editor’s theme, keyboard shortcuts, and preferences. It is free and open-source, although the official download is under a proprietary license.”
  • “Visual Studio Code is based on Electron, a framework which is used to deploy Node.js applications for the desktop running on the Blink layout engine. Although it uses the Electron framework, the software is not a fork of Atom, it is actually based on Visual Studio Online’s editor (codename “Monaco”)”
  • It would be interesting to see official support for VSCode on FreeBSD
  • Has anyone tried VSCode on the FreeBSD Code base?

Beastie Bits


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post The Wayland Machine | BSD Now 179 first appeared on Jupiter Broadcasting.

]]>
Enjoy the Silence | BSD Now 178 https://original.jupiterbroadcasting.net/106451/enjoy-the-silence-bsd-now-178/ Thu, 26 Jan 2017 07:42:51 +0000 https://original.jupiterbroadcasting.net/?p=106451 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines Ports no longer build on EOL FreeBSD versions The FreeBSD ports tree has been updated to automatically fail if you try to compile ports […]

The post Enjoy the Silence | BSD Now 178 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –

Headlines

Ports no longer build on EOL FreeBSD versions

  • The FreeBSD ports tree has been updated to automatically fail if you try to compile ports on EOL versions of FreeBSD (any version of 9.x or earlier, 10.0 – 10.2, or 11 from before 11.0)
  • This is to prevent shooting yourself in the foot, as the compatibility code for those older OSes has been removed now that they are no longer supported.
  • If you use pkg, you will also run into problems on old releases. Packages are always built on the oldest supported release in a branch. Until recently, this meant packages for 10.1, 10.2, and 10.3 were compiled on 10.1. Now that 10.1 and 10.2 are EOL, packages for 10.x are compiled on 10.3.
  • This matters because 10.3 supports the new openat() and various other *at() functions used by capsicum. Now that pkg and packages are built on a version that supports this new feature, they will not run on systems that do not support it. So pkg will exit with an error as soon as it tries to open a file.
  • You can work around this temporarily by using the pkg-static command, but you should upgrade to a supported release immediately.

Improving TrueOS: OpenRC

  • With TrueOS moving to a rolling-release model, we’ve decided to be a bit more proactive in sharing news about new features that are landing.
  • This week we’ve posted an article talking about the transition to OpenRC
  • In past episodes you’ve heard me mention OpenRC, but hopefully today we can help answer any of those lingering questions you may still have about it
  • The first thing always asked, is “What is OpenRC?”

OpenRC is a dependency-based init system working with the system provided init program. It is used with several Linux distributions, including Gentoo and Alpine Linux. However, OpenRC was created by the NetBSD developer Roy Marples in one of those interesting intersections of Linux and BSD development. OpenRC’s development history, portability, and 2-clause BSD license make its integration into TrueOS an easy decision.

  • Now that we know a bit about what it is, how does it behave differently than traditional RC?

TrueOS now uses OpenRC to manage all system services, as opposed to FreeBSD’s RC. Instead of using rc.d for base system rc scripts, OpenRC uses init.d. Also, every service in OpenRC has its own user configuration file, located in /etc/conf.d/ for the base system and /usr/local/etc/conf.d/ for ports. Finally, OpenRC uses runlevels, as opposed to the FreeBSD single- or multi-user modes. You can view the services and their runlevels by typing $ rc-update show -v in a CLI. Also, TrueOS integrates OpenRC service management into SysAdm with the Service Manager tool

  • One of the prime benefits of OpenRC is much faster boot-times, which is important in a portable world of laptops (and desktops as well). But service monitoring and crash detection are also important parts of what make OpenRC a substantial upgrade for TrueOS.

  • Lastly, people have asked us about migration: what is done, and what isn’t? As of now, almost all FreeBSD base system services have been migrated over. In addition, most desktop-facing services required to run Lumina and the like are also ported. We are still going through the ports tree and converting legacy rc.d scripts to init.d, but the process takes time. Several new folks have begun contributing OpenRC scripts, and we hope to have all of the roughly 1,000 ports converted over this year.


BSDRP Releases 1.70

  • A new release of the BSD Router Project
  • This distro is designed to replace high-end routers, like those from Cisco and Juniper, with FreeBSD running on regular off-the-shelf servers.
  • Highlights:
    • Upgraded to FreeBSD 11.0-STABLE r312663 (skip 11.0 for massive performance improvement)
    • Re-Added: netmap-fwd (https://github.com/Netgate/netmap-fwd)
    • Add FIBsync patch to netmap-fwd from Zollner Robert wolfit_ro@yahoo.com
    • netmap pkt-gen supports IPv6, thanks to Andrey V. Elsukov (ae@freebsd.org)
    • bird 1.6.3 (add BGP Large communities support)
    • OpenVPN 2.4.0 (adds the high speed AEAD GCM cipher)
  • All of the other packages have also been upgraded
  • A lot of great work has been done on BSDRP, and it has also generated a lot of great benchmarks and testing that have resulted in performance increases and improved understanding of how FreeBSD networking scales across different CPU types and speeds

DragonFlyBSD gets UEFI support

  • This commit adds support for UEFI to the Dragonfly Installer, allowing new systems to be installed to boot from UEFI
  • This script provides a way to build a HAMMER filesystem that works with UEFI
  • There is also a UEFI man page
  • The install media has also been updated to support booting from either UEFI or MBR, in the same way that the FreeBSD images work

News Roundup

The Rule of Silence

  • “The rule of silence, also referred to as the silence is golden rule, is an important part of the Unix philosophy that states that when a program has nothing surprising, interesting or useful to say, it should say nothing. It means that well-behaved programs should treat their users’ attention and concentration as being valuable and thus perform their tasks as unobtrusively as possible. That is, silence in itself is a virtue.”
  • This doesn’t mean a program cannot be verbose, it just means you have to ask it for the additional output, rather than having it by default
  • “There is no single, standardized statement of the Unix philosophy, but perhaps the simplest description would be: “Write programs that are small, simple and transparent. Write them so that they do only one thing, but do it well and can work together with other programs.” That is, the philosophy centers around the concepts of smallness, simplicity, modularity, craftsmanship, transparency, economy, diversity, portability, flexibility and extensibility.”
  • “This philosophy has been fundamental to the fact that Unix-like operating systems have been thriving for more than three decades, far longer than any other family of operating systems, and can be expected to see continued expansion of use in the years to come”
  • “The rule of silence is one of the oldest and most persistent design rules of such operating systems. As intuitive as this rule might seem to experienced users of such systems, it is frequently ignored by the developers of other types of operating systems and application programs for them. The result is often distraction, annoyance and frustration for users.”
  • “There are several very good reasons for the rule of silence: (1) One is to avoid cluttering the user’s mind with information that might not be necessary or might not even be desired. That is, unnecessary information can be a distraction. Moreover, unnecessary messages generated by some operating systems and application programs are sometimes poorly worded, and can cause confusion or needless worry on the part of users.”
  • No news is good news. When there is bad news, error messages should be descriptive, and ideally tell the user what they might do about the error.
  • “A third reason is that command line programs (i.e., all-text mode programs) on Unix-like operating systems are designed to work together with pipes, i.e., the output from one program becomes the input of another program. This is a major feature of such systems, and it accounts for much of their power and flexibility. Consequently, it is important to have only the truly important information included in the output of each program, and thus in the input of the next program.”
  • Have you ever had to try to strip out useless output so you could feed that data into another program?
  • “The rule of silence originally applied to command line programs, because all programs were originally command line programs. However, it is just as applicable to GUI (graphical user interfaces) programs. That is, unnecessary and annoying information should be avoided regardless of the type of user interface.”
  • “An example is the useless and annoying dialog boxes (i.e., small windows) that pop up on the display screen with surprising frequency on some operating systems and programs. These dialog boxes contain some obvious, cryptic or unnecessary message and require the user to click on them in order to close them and proceed with work. This is an interruption of concentration and a waste of time for most users. Such dialog boxes should be employed only in situations in which some unexpected result might occur or to protect important data.”
  • It goes on to make an analogy about Public Address systems. If too many unimportant messages, like advertisements, are sent over the PA system, people will start to ignore them, and miss the important announcements.
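The rule is easy to see with ordinary tools; here is a small shell illustration of silent-by-default, verbose-on-request behavior:

```shell
#!/bin/sh
# cp follows the rule of silence: success produces no output,
# and verbosity is strictly opt-in via -v.
tmp=$(mktemp -d)
echo "hello" > "$tmp/src"

cp "$tmp/src" "$tmp/dst"      # silent on success: no news is good news
echo "exit status: $?"        # scripts test $?, not chatter

cp -v "$tmp/src" "$tmp/dst2"  # verbose only because we asked for it

rm -rf "$tmp"
```

Because success is silent, the output stays clean enough to feed straight into the next program in a pipe.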

The Tao of tmux

  • An interesting article floated across my news feed a few weeks back: what essentially boils down to a book called the “Tao of tmux”, which immediately piqued my interest.
  • My story may be similar to many of yours. I was initially raised on using screen, and screen only for my terminal session and multiplexing needs.
  • Since then I’ve only had a passing interest in tmux, but it’s always been one of those utilities I felt was worthy of investing some more time into. (Especially when seeing some of the neat setups some of my peers have with it)
  • Needless to say, this article has been bookmarked, and I’ve started digesting some of it, but thought it would be good to share with anybody else who finds themselves in a similar situation.
  • The book starts off well, explaining in the simplest terms possible what tmux really is, by comparing and contrasting it to something we are all familiar with: GUIs!
  • Helpfully they also include a chart which explains some of the terms we will be using frequently when discussing tmux
  • One of the things the author does recommend is also making sure you are up to speed on your Terminal knowledge.

Before getting into tmux, a few fundamentals of the command line should be reviewed. Often, we’re so used to using these out of street smarts and muscle memory a great deal of us never see the relation of where these tools stand next to each other.

Seasoned developers are familiar with zsh, Bash, iTerm2, konsole, /dev/tty, shell scripting, and so on. If you use tmux, you’ll be around these all the time, regardless of whether you’re in a GUI on a local machine or SSH’ing into a remote server.

If you want to learn more about how processes and TTY’s work at the kernel level (data structures and all) the book The Design and Implementation of the FreeBSD Operating System (2nd Edition) by Marshall Kirk McKusick is nice. In particular, Chapter 4, Process Management and Section 8.6, Terminal Handling. The TTY demystified by Linus Åkesson (available online) dives into the TTY and is a good read as well.

  • We had to get that shout-out of Kirk’s book in here 😉
  • From here the book/article takes us on a whirlwind journey of Sessions, Windows, Panes and more. Every control command is covered, along with information on how to customize your statusbar, tips, tricks and the like. There’s far more here than we can cover in a single segment, but you are highly encouraged to bookmark this one and start your own adventure into the world of tmux.
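Not from the book itself, but as a flavor of the customization it covers, a minimal ~/.tmux.conf might look like this (every choice here is illustrative personal preference):

```
# ~/.tmux.conf -- illustrative starting point, not the book's config

# Use Ctrl-a as the prefix, like screen, easing the transition
set -g prefix C-a
unbind C-b

# Number windows from 1 and keep a readable statusbar
set -g base-index 1
set -g status-bg black
set -g status-fg green
set -g status-left "[#S] "

# Split panes with more memorable keys
bind | split-window -h
bind - split-window -v
```

Reload it in a running session with `tmux source-file ~/.tmux.conf`.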

SDF Celebrates 30 years of service in 2017

  • HackerNews thread on SDF
  • “Super Dimension Fortress (SDF, also known as freeshell.org) is a non-profit public access UNIX shell provider on the Internet. It has been in continual operation since 1987 as a non-profit social club. The name is derived from the Japanese anime series The Super Dimension Fortress Macross; the original SDF server was a BBS for anime fans. From its BBS roots, which have been well documented as part of the BBS: The Documentary project, SDF has grown into a feature-rich provider serving members around the world.”
  • A public access UNIX system, it was many people’s first access to a UNIX shell.
  • In the 90s, Virtual Machines were rare, the software to run them usually cost a lot of money and no one had very much memory to try to run two operating systems at the same time.
  • So for many people, these types of shell accounts were the only way they could access UNIX without having to replace the OS on their only computer
  • This is how I first started with UNIX, eventually moving to paying for access to bigger machines, and then buying my own servers and renting out shell accounts to host IRC servers and channel protection bots.
  • “On June 16th, 1987 Ted Uhlemann (handle: charmin, later iczer) connected his Apple ][e’s 300 baud modem to the phone line his mother had just given him for his birthday. He had published the number the night before on as many BBSes around the Dallas Ft. Worth area that he could and he waited for the first caller. He had a copy of Magic Micro BBS which was written in Applesoft BASIC and he named the BBS “SDF-1” after his favorite Japanimation series ROBOTECH (Macross). He hoped to draw users who were interested in anime, industrial music and the Church of the Subgenius.”
  • I too started out in the world of BBSes before I had access to the internet. My parents got me a dedicated phone line for my birthday, so I wouldn’t tie up their line all the time. I quickly ended up running my own BBS, the Sudden Death BBS (Renegade on MS-DOS)
  • I credit this early experience for my discovery of a passion for Systems Administration, which led me to my current career
  • “Slowly, SDF has grown over all these years, never forgetting our past and unlike many sites on the internet, we actually have a past. Some people today may come here and see us as outdated and “retro”. But if you get involved, you’ll see it is quite alive with new ideas and a platform for opportunity to try many new things. The machines are often refreshed, the quotas are gone, the disk space is expanding as are the features (and user driven features at that) and our cabinets have plenty of space for expansion here in the USA and in Europe (Germany).”
  • “Think about ways you’d like to celebrate SDF’s 30th and join us on the ‘bboard’ to discuss what we could do. I realize many of you have likely moved on yourselves, but I just wanted you to know we’re still here and we’ll keep doing new and exciting things with a foundation in the UNIX shell.”

Getting Minecraft to Run on NetBSD

  • One thing that doesn’t come up often on BSD Now is the idea of gaming. I realize most of us are server folks, or perhaps don’t play games (The PC is for work, use your fancy-schmancy PS4 and get off my lawn, you kids)
  • Today I thought it would be fun to highlight this post over at Reddit talking about running Minecraft on NetBSD
  • Now I realize this may not be news to some of you, but perhaps it is to others. For the record my kids have been playing Minecraft on PC-BSD / TrueOS for years. It’s the primary reason they are more often booted into that instead of Windows. (Funny story behind that – Got sick of all the 3rd party mods, which more often than not came helpfully bundled with viruses and malware)
  • On NetBSD the process looks a bit different than on FreeBSD. First up, you’ll need to enable Linux Emulation and install Oracle JRE (Not OpenJDK, that path leads to sadness here)
  • The guide will then walk us through the process of fetching the Linux runtime packages, extracting them, and then enabling bits such as ‘procfs’ that are required to run Linux binaries.
  • Once that’s done, minecraft is only a simple “oracle8-jre /path/to/minecraft.jar” command away from starting up, and you’ll be “crafting” in no time. (Does anybody even play survival anymore?)

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post Enjoy the Silence | BSD Now 178 first appeared on Jupiter Broadcasting.

]]>
Getting Pi on my Wifi | BSD Now 177 https://original.jupiterbroadcasting.net/106301/getting-pi-on-my-wifi-bsd-now-177/ Thu, 19 Jan 2017 01:49:54 +0000 https://original.jupiterbroadcasting.net/?p=106301 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines WiFi: 11n hostap mode added to athn(4) driver, testers wanted “OpenBSD as WiFi access points look set to be making a comeback in the […]

The post Getting Pi on my Wifi | BSD Now 177 first appeared on Jupiter Broadcasting.

]]>
RSS Feeds:

MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed

Become a supporter on Patreon:

Patreon

– Show Notes: –

Headlines

WiFi: 11n hostap mode added to athn(4) driver, testers wanted

  • “OpenBSD as WiFi access points look set to be making a comeback in the near future”
  • “Stefan Sperling added 802.11n hostap mode, with full support initially for the Atheros chips supported by the athn(4) driver.”
  • “Hostap performance is not perfect yet but should be no worse than 11a/b/g modes in the same environment.”
  • “For Linux clients a fix for WME params is needed which I also posted to tech@”
  • “This diff does not modify the known-broken and disabled ar9003 code, apart from making sure it still builds.”
  • “I’m looking for both tests and OKs.”

  • There has also been a flurry of work in FreeBSD on the ath10k driver, which supports 802.11ac

  • Like this one and this one

The long-awaited iocage update has landed

  • We’ve hinted at the new things happening behind the scenes with iocage, and this last week the code has made its first public debut.
  • So what’s changed, you may ask? The biggest change is that iocage has undergone a complete overhaul, moving from its original shell-based implementation to Python.
  • The story behind that is that the author (Brandon) works at iXsystems, and the plan is to move away from the legacy warden-based jail management which was also shell-based.
  • This new Python re-write will allow it to integrate into FreeNAS (and other projects) better by exposing an API for all jail management tasks. That’s right, no more ugly CLI output parsing just to wrangle jail options either at creation or runtime.
  • But what about users who just run iocage manually from the CLI? No worries, the new iocage is almost identical to the original CLI usage, making the switch over very simple.
  • Just to recap, let’s look at the new features list:
  • FEATURES:
    • Ease of use
    • Rapid jail creation within seconds
    • Automatic package installation
    • Virtual networking stacks (vnet)
    • Shared IP based jails (non vnet)
    • Transparent ZFS snapshot management
    • Export and import
  • The new iocage is available now via ports and packages under sysutils/py-iocage, give it a spin and be sure to report issues back to the developer(s).

Reading DHT11 temperature sensors on a Raspberry Pi under FreeBSD

  • “DHT-11 is a very cheap temperature/humidity sensor which is commonly used in the IoT devices. It is not very accurate, so for the accurate measurement i would recommend to use DHT21 instead. Anyway, i had DHT-11 in my tool box, so decided to start with it. DHT-11 using very simple 1 wire protocol – host is turning on chip by sending 18ms low signal to the data output and then reading 40 bytes of data.”
  • “To read data from the chip it should be connected to the power (5v) and gpio pin. I used pin 2 as VCC, 6 as GND and 11 as GPIO”
  • “There is no support for this device out of the box on FreeBSD. I found some sample code on the github, see lex/freebsd-gpio-dht11 repository. This code was a good starting point, but soon i found 2 issues with it:
    • Results are very unreliable, probably due to gpio decoding algorithm.
    • Checksum is not validated, so sometimes values are bogus.”
  • “Initially i was thinking to fix this myself, but later found kernel module for this purpose, 1 wire over gpio. This module contains DHT11 kernel driver (gpio_sw) which implements DHT-11 protocol in the kernel space and exporting /dev/sw0 for the userland. Driver compiles on FreeBSD11/ARM without any changes. Use make install to install the driver.”
  • The articles goes into how to install and configure the driver, including a set of devfs rules to allow non-root users to read from the sensor
  • “Final goal was to add this sensor to the domoticz software. It is using LUA scripting to extend its functionality, e.g. to obtain data from non-supported or non standard devices. So, i decided to read /dev/sw0 from the LUA.”
  • They ran into some trouble with LUA trying to read too much data at once, and had to work around it
  • In the end, they got the results and were able to use them in the monitoring tool
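The devfs rules mentioned in the article might look roughly like this on FreeBSD; the ruleset name/number and the sw0 node path are assumptions here, not the article's exact rules:

```
# /etc/devfs.rules -- sketch: make the sensor device world-readable
[localrules=10]
add path 'sw0' mode 0644
```

```
# /etc/rc.conf -- apply the ruleset at boot
devfs_system_ruleset="localrules"
```

After `service devfs restart`, an unprivileged monitoring script can read /dev/sw0 directly.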

Tor-ified Home Network using HardenedBSD and a RPi3

  • Shawn from HardendBSD has posted an article up on GitHub talking about his deployment of a new Tor relay on a RPi3
  • This particular method was attractive, since it allows running a Relay, but without it being on a machine which may have personal data, such as SSH keys, files, etc
  • While his setup is done on HardendBSD, the same applies to a traditional FreeBSD setup as well.
  • First up, is the list of things needed for this project:
  1. Raspberry Pi 3 Model B Rev 1.2 (aka, RPI3)
  2. Serial console cable for the RPI3
  3. Belkin F4U047 USB Ethernet Dongle
  4. Insignia NS-CR2021 USB 2.0 SD/MMC Memory Card Reader
  5. 32GB SanDisk Ultra PLUS MicroSDHC
  6. A separate system, running FreeBSD or HardenedBSD
  7. HardenedBSD clang 4.0.0 image for the RPI3
  8. An external drive to be formatted
  9. A MicroUSB cable to power the RPI3
  10. Two network cables
  11. Optional: Edimax N150 EW-7811Un Wireless USB
  12. Basic knowledge of vi
  • After getting HBSD running on the RPi3 and serial connection established, he then takes us through the process of installing and enabling the various services needed. (Don’t forget to growfs your sdcard first!)
  • Now the tricky part is that some of the packages needed to be compiled from ports, which is somewhat time-consuming on a RPi. He strongly recommends not compiling on the sdcard (it sounds like personal experience has taught him well) and using iSCSI or an external USB drive instead.
  • With the compiling done, our package / software setup is nearly complete. Next up is firewalling the box, which he helpfully provides a full PF config setup that we can copy-n-paste here.
  • The last bits will be enabling the torrc configuration knobs, which if you follow his example again, will result in a tor public relay, and a local transparent proxy for you.
  • Bonus! Shawn helpfully provides DHCPD configurations, and even Wireless AP configurations, if you want to setup your RPi3 to proxy for devices that connect to it.
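As a taste of what the firewalling step looks like, here is an illustrative pf.conf sketch, not Shawn's actual ruleset; the interface names and Tor's TransPort/DNSPort numbers are assumptions:

```
# /etc/pf.conf -- sketch of a transparent Tor proxy gateway (FreeBSD pf syntax)
ext_if = "ue0"   # USB Ethernet uplink (assumed)
int_if = "ue1"   # inner network (assumed)

set skip on lo0

# Redirect client TCP into Tor's transparent proxy (TransPort, assumed 9040)
rdr pass on $int_if inet proto tcp all -> 127.0.0.1 port 9040
# Send client DNS to Tor's DNSPort (assumed 5353)
rdr pass on $int_if inet proto udp to port 53 -> 127.0.0.1 port 5353

block in all
pass in on $int_if keep state
pass out on $ext_if keep state
```

The port numbers must match the TransPort and DNSPort lines in torrc for the redirection to work.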

News Roundup

Unix Admin. Horror Story Summary, version 1.0

  • A great collection of stories, many of which will ring true with our viewers
  • The very first one is about a user changing root’s shell to /usr/local/bin/tcsh but forgetting to make it executable, resulting in not being able to log in as root.
  • I too have run into this issue, in a slightly different way. I had tcsh as my user shell (back before tcsh was in base), and after a major OS upgrade, but before I had a chance to recompile all of my ports. Now I couldn’t ssh in to the remote machine in order to recompile my shell. Now I always use a shell included in the base system, and test it before rebooting after an upgrade.
  • “Our operations group, a VMS group but trying to learn UNIX, was assigned account administration. They were cleaning up a few non-used accounts like they do on VMS – backup and purge. When they came across the account “sccs”, which had never been accessed, away it went. The “deleteuser” utility from DEC asks if you would like to delete all the files in the account. Seems reasonable, huh? Well, the home directory for “sccs” is “/”. Enough said :-(“
  • “I was working on a line printer spooler, which lived in /etc. I wanted to remove it, and so issued the command “rm /etc/lpspl.” There was only one problem. Out of habit, I typed “passwd” after “/etc/” and removed the password file. Oops.”
  • I’ve done things like this as well. Finger memory can be dangerous
  • “I was happily churning along developing something on a Sun workstation, and was getting a number of annoying permission denieds from trying to write into a directory heirarchy that I didn’t own. Getting tired of that, I decided to set the permissions on that subtree to 777 while I was working, so I wouldn’t have to worry about it. Someone had recently told me that rather than using plain “su”, it was good to use “su -“, but the implications had not yet sunk in. (You can probably see where this is going already, but I’ll go to the bitter end.) Anyway, I cd’d to where I wanted to be, the top of my subtree, and did su -. Then I did chmod -R 777. I then started to wonder why it was taking so damn long when there were only about 45 files in 20 directories under where I (thought) I was. Well, needless to say, su – simulates a real login, and had put me into root’s home directory, /, so I was proceeding to set file permissions for the whole system to wide open. I aborted it before it finished, realizing that something was wrong, but this took quite a while to straighten out.”
  • Where is a ZFS snapshot when you need it?
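The first story above suggests a cheap guard worth running before committing to a new login shell; the final chsh step is left as a comment since it is illustrative:

```shell
#!/bin/sh
# Sanity-check a shell binary before making it a login shell.
new_shell=/bin/sh    # substitute /usr/local/bin/tcsh, etc.

# 1. It must exist and be executable
[ -x "$new_shell" ] || { echo "$new_shell is not executable" >&2; exit 1; }

# 2. It should actually start and exit cleanly
"$new_shell" -c 'exit 0' || { echo "$new_shell fails to run" >&2; exit 1; }

echo "$new_shell looks safe"
# Only now would you run: chsh -s "$new_shell"
```

Running the candidate shell in a second terminal before logging out of the first is the other half of the same discipline.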

How individual contributors get stuck

  • An interesting post looking at the common causes of people getting stuck when trying to create or contribute new code
    • Brainstorming/architecture: “I must have thought through all edge cases of all parts of everything before I can begin this project”
    • Researching possible solutions forever (often accompanied by desire to do a “bakeoff” where they build prototypes in different platforms/languages/etc)
    • Refactoring: “this code could be cleaner and everything would be just so much easier if we cleaned this up… and this up… and…”
    • Helping other people instead of doing their assigned tasks (this one isn’t a bad thing in an open source community)
    • Working on side projects instead of the main project (it is your time, it is up to you how to spend it)
    • Excessive testing (rare)
    • Excessive automation (rare)
    • Finish the last 10–20% of a project
    • Start a project completely from scratch
    • Do project planning (You need me to write what now? A roadmap?) (this is why FreeBSD has devsummits, some things you just need to whiteboard)
    • Work with unfamiliar code/libraries/systems
    • Work with other teams (please don’t make me go sit with data engineering!!)
    • Talk to other people
    • Ask for help (far beyond the point they realized they were stuck and needed help)
    • Deal with surprises or unexpected setbacks
    • Deal with vendors/external partners
    • Say no, because they can’t seem to just say no (instead of saying no they just go into avoidance mode, or worse, always say yes)
  • “Noticing how people get stuck is a super power, and one that many great tech leads (and yes, managers) rely on to get big things done. When you know how people get stuck, you can plan your projects to rely on people for their strengths and provide them help or even completely side-step their weaknesses. You know who is good to ask for which kinds of help, and who hates that particular challenge just as much as you do.”
  • “The secret is that all of us get stuck and sidetracked sometimes. There’s actually nothing particularly “bad” about this. Knowing the ways that you get hung up is good because you can choose to either a) get over the fears that are sticking you (lack of knowledge, skills, or confidence), b) avoid such tasks as much as possible, and/or c) be aware of your habits and use extra diligence when faced with tackling these areas.”

Make Docs!

  • “MkDocs is a fast, simple and downright gorgeous static site generator that’s geared towards building project documentation. Documentation source files are written in Markdown, and configured with a single YAML configuration file.”
  • “MkDocs builds completely static HTML sites that you can host on GitHub pages, Amazon S3, or anywhere else you choose”
  • It is an easy to install python package
  • It includes a server mode that auto-refreshes the page as you write the docs, making it easy to preview your work before you post it online
  • Everything needs docs, and writing docs should be as simple as possible, so that more of them will get written
  • Go write some docs!
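A minimal mkdocs.yml is all it takes to get started (the page names below are placeholders; note that newer MkDocs releases renamed the `pages` key to `nav`):

```
# mkdocs.yml -- minimal example configuration
site_name: My Project Docs
theme: readthedocs
pages:                 # called "nav" in MkDocs 1.0 and later
  - Home: index.md
  - Install: install.md
```

With that file next to a docs/ directory, `mkdocs serve` previews the site locally and `mkdocs build` emits the static HTML.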

Experimental FreeNAS 11/12 builds

  • We know there’s a lot of FreeNAS users who listen to BSDNow, so I felt it was important to share this little tidbit.
  • I’ve posted something to the forums last night which includes links to brand-new spins of FreeNAS 9.10 based upon FreeBSD 11/stable and 12/current.
  • These builds are updated nightly via our Jenkins infrastructure and hopefully will provide a new playground for technical folks and developers to experiment with FreeBSD features in their FreeNAS environment, long before they make it into a -STABLE release.
  • As usual, the notes of caution do apply, these are nightlies, and as such bugs will abound. Do NOT use these with your production data, unless you are crazy, or just want an excuse to test your backup strategy
  • If you do run these builds, of course feedback is welcome via the usual channels, such as the bug tracker.
  • The hope is that by testing FreeBSD code earlier, we can vet and determine what is safe / ready to go into mainline FreeNAS sooner rather than later.

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post Getting Pi on my Wifi | BSD Now 177 first appeared on Jupiter Broadcasting.

]]>
Linking your world | BSD Now 176 https://original.jupiterbroadcasting.net/106146/linking-your-world-bsd-now-176/ Thu, 12 Jan 2017 04:33:09 +0000 https://original.jupiterbroadcasting.net/?p=106146 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines FreeBSD Kernel and World, and many Ports, can now be linked with lld “With this change applied I can link the entirety of the […]

The post Linking your world | BSD Now 176 first appeared on Jupiter Broadcasting.

]]>

– Show Notes: –

Headlines

FreeBSD Kernel and World, and many Ports, can now be linked with lld

  • “With this change applied I can link the entirety of the FreeBSD/amd64 base system (userland world and kernel) with LLD.”
  • “Rafael’s done an initial experimental Poudriere FreeBSD package build with lld head, and found almost 20K out of 26K ports built successfully. I’m now looking at getting CI running to test this on an ongoing basis. But, I think we’re at the point where an experimental build makes sense.”
  • Such testing will become much easier once llvm 4.0 is imported into -current
  • “I suggest that during development we collect patches in a local git repo — for example, I’ve started here for my Poudriere run https://github.com/emaste/freebsd-ports/commits/ports-lld”
  • “It now looks like libtool is responsible for the majority of my failed / skipped ports. Unless we really think we’ll add “not GNU” and other hacks to lld we’re going to have to address libtool limitations upstream and in the FreeBSD tree. I did look into libtool a few weeks ago, but unfortunately haven’t yet managed to produce a patch suitable for sending upstream.”
  • If you are interested in LLVM/Clang/LLD/LLDB etc, check out: A Tourist’s Guide to the LLVM Source Code

Documenting NetBSD’s scheduler tweaks

  • A followup to our previous coverage of improvements to the scheduler in NetBSD
  • “NetBSD’s scheduler was recently changed to better distribute load of long-running processes on multiple CPUs. So far, the associated sysctl tweaks were not documented, and this was changed now, documenting the kern.sched sysctls.”
  • kern.sched.cacheht_time (dynamic): Cache hotness time in which a LWP is kept on one particular CPU and not moved to another CPU. This reduces the overhead of flushing and reloading caches. Defaults to 3ms. Needs to be given in “hz” units, see mstohz(9).
  • kern.sched.balance_period (dynamic): Interval at which the CPU queues are checked for re-balancing. Defaults to 300ms.
  • kern.sched.min_catch (dynamic): Minimum count of migratable (runnable) threads for catching (stealing) from another CPU. Defaults to 1 but can be increased to decrease chance of thread migration between CPUs.
  • It is important to have good documentation for these tunables, so that users can understand what it is they are adjusting
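To persist such tweaks across reboots, NetBSD applies /etc/sysctl.conf at boot; a fragment might look like this (the values are illustrative, and note the hz-units caveat for cacheht_time):

```
# /etc/sysctl.conf -- example scheduler tuning; defaults are usually fine
# Keep a LWP on one CPU longer before migration (hz units, see mstohz(9))
kern.sched.cacheht_time=3
# Check CPU run queues for re-balancing every 300 ms
kern.sched.balance_period=300
# Require at least 2 runnable threads before stealing from another CPU
kern.sched.min_catch=2
```

The same knobs can be changed live with `sysctl -w` for experimentation before committing them to the file.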

FreeBSD Network Gateway on EdgeRouter Lite

  • “EdgeRouter Lite is a great device to run at the edge of a home network. It becomes even better when it’s running FreeBSD. This guide documents how to setup such a gateway. There are accompanying git repos to somewhat automate the process as well.”
  • “Colin Percival has written a great blog post on the subject, titled FreeBSD on EdgeRouter Lite – no serial port required . In it he provides and describes a shell script to build a bootable image of FreeBSD to be run on ERL, available from GitHub in the freebsd-ERL-build repo. I have built a Vagrant-based workflow to automate the building of the drive image. It’s available on GitHub in the freebsd-edgerouterlite-ansible repo. It uses the build script Percival wrote.”
  • “Once you’ve built the disk image it’s time to write it to a USB drive. There are two options: overwrite the original drive in the ERL or buy a new drive. I tried the second option first and wrote to a new SanDisk Ultra Fit 32GB USB 3.0 Flash Drive (SDCZ43-032G-GAM46). It did not work and I later found on some blog that those drives do not work. I have not tried another third party drive since.”
  • The tutorial covers all of the steps, and the configuration files, including rc.conf, IP configuration, DHCP (and v6), pf, and DNS (unbound)
  • “I’m pretty happy with ERL and FreeBSD. There is great community documentation on how to configure all the pieces of software that make a FreeBSD-based home network gateway possible. I can tweak things as needed and upgrade when newer versions become available.”
  • “My plan on upgrading the base OS is to get a third party USB drive that works, write a newer FreeBSD image to it, and replace the drive in the ERL enclosure. This way I can keep a bunch of drives in rotation. Upgrades to newer builds or reverts to last known good version are as easy as swapping USB drives.”
  • Although something more nanobsd style with 2 partitions on the one drive might be easier.
  • “Configuration with Ansible means I don’t have to manually do things again and again. As the configs change they’ll be tracked in git so I get version control as well. ERL is simply a great piece of network hardware. I’m tempted to try Ubiquiti’s WiFi products instead of a mixture of DD-WRT and OpenWRT devices I have now. But that is for another day and perhaps another blog post.”
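A sketch of the rc.conf side of such a gateway follows; the interface names and addresses are assumptions (the ERL's onboard NICs show up as octe devices under FreeBSD), and the tutorial has the authoritative versions:

```
# /etc/rc.conf -- FreeBSD home-gateway sketch for an EdgeRouter Lite
hostname="gw.example.net"
gateway_enable="YES"                 # forward IPv4 between interfaces
ifconfig_octe0="DHCP"                # WAN side, addressed by the ISP
ifconfig_octe1="inet 192.168.1.1 netmask 255.255.255.0"   # LAN side
pf_enable="YES"                      # firewall/NAT ruleset in /etc/pf.conf
dhcpd_enable="YES"                   # isc-dhcp server from ports (assumed)
local_unbound_enable="YES"           # validating DNS resolver from base
```

Each service then gets its own config (pf.conf, dhcpd.conf, unbound.conf), which is exactly what the Ansible repo keeps under version control.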

A highly portable build system targeting modern UNIX systems

  • An exciting new/old project is up on GitHub that we wanted to bring your attention to.
  • BSD Owl is a highly portable build-system based around BSD Make that supports a variety of popular (and not so popular) languages, such as:

    • C programs, compiled for several targets
    • C libraries, static and shared, compiled for several targets
    • Shell scripts
    • Python scripts
    • OCaml programs
    • OCaml libraries, with ocamldoc documentation
    • OCaml plugins
    • TeX documents, prepared for several printing devices
    • METAPOST figures, with output as PDF, PS, SVG or PNG, either as part of a TeX document or as standalone documents
  • What about features you may ask? Well BSD Owl has plenty of those to go around:

    • Support of compilation profiles
    • Support of the parallel mode (at the directory level)
    • Support of separate trees for sources and objects
    • Support of architecture-dependent compilation options
    • Support GNU autoconf
    • Production of GPG-signed tarballs
    • Developer subshell, empowered with project-specific scripts
    • Literate programming using noweb
    • Preprocessing with m4
  • As far as platform support goes, BSD Owl is tested on OS X, Debian Jessie and FreeBSD > 9. Future support for OpenBSD and NetBSD is planned, once they update their respective BSD Make binaries to more modern versions
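A hypothetical BSD Owl project Makefile might look like the following; the exact name of the module to .include is an assumption here and should be checked against the project's own documentation:

```
# Makefile -- hypothetical BSD Owl C-program description
# (the "bps.prog.mk" module name is assumed, not verified)
PROGRAM = hello
SRCS = hello.c

.include "bps.prog.mk"
```

The idea, as with all BSD Make based systems, is that the project file declares *what* is built and the included module supplies *how*.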


News Roundup

find -delete in OpenBSD. Thanks to tedu@, OpenBSD will have this very handy flag in the future.

  • OpenBSD’s find(1) utility will now support the -delete operation
  • “This option is not posix (not like that’s stopped find accumulating a dozen extensions), but it is in gnu and freebsd (for 20 years). it’s also somewhat popular among sysadmins and blogs, etc. and perhaps most importantly, it nicely solves one of the more troublesome caveats of find (which the man page actually covers twice because it’s so common and easy to screw up). So I think it makes a good addition.”
  • The actual code was borrowed from FreeBSD
  • Using the -delete option is much more performant than forking rm once for each file, and safer because there is no risk of mangling path names
  • If you encounter a system without a -delete option, your best bet is to use the -print0 option of find, which will print each filename terminated by a null byte, and pipe that into xargs -0 rm
  • This avoids any ambiguity caused by files with spaces in the names
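Both idioms, side by side, in a throwaway directory:

```shell
#!/bin/sh
# Demonstrate safe bulk deletion on a scratch tree.
tmp=$(mktemp -d)
touch "$tmp/plain" "$tmp/name with spaces"

# Portable fallback: NUL-terminated names survive spaces intact
find "$tmp" -type f -print0 | xargs -0 rm --

# Where find supports it, -delete does the same with no extra processes:
#   find "$tmp" -type f -delete

rmdir "$tmp"     # succeeds only if every file was actually removed
echo "cleaned up"
```

The `-delete` variant also sidesteps the classic ordering mistake the man page warns about, since find refuses to accept `-delete` before the path arguments would make it dangerous.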

New version of the Lumina desktop released

  • Just in time to kick off 2017, we have a new release of Lumina Desktop (1.2.0)
  • Some of the notable changes include fixes to make it easier to port to other platforms, and some features:

  • New Panel Plugins:

    • “audioplayer” (panel version of the desktop plugin with the same name): Allows the user to load/play audio files directly through the desktop itself.
    • “jsonmenu” (panel version of the menu plugin with the same name): Allows an external utility/script to be used to generate a menu/contents on demand.
  • New Menu Plugins:
    • “lockdesktop”: Menu option for instantly locking the desktop session.
  • New Utilities:

    • lumina-archiver: This is a pure Qt5 front-end to the “tar” utility for managing/creating archives. This can also use the dd utility to burn a “*.img” file to a USB device for booting.
  • Looks like the news already made its rounds to a few different sites, with Phoronix and Softpedia picking it up as well

  • Phoronix
  • Softpedia
  • TrueOS users running the latest updates are already on the pre-release version of 1.2.1, so nothing has to be done there to get the latest and greatest.

dd is not a disk writing tool

  • “If you’ve ever used dd, you’ve probably used it to read or write disk images:”
  • # Write myfile.iso to a USB drive
    dd if=myfile.iso of=/dev/sdb bs=1M

  • “Usage of dd in this context is so pervasive that it’s being hailed as the magic gatekeeper of raw devices. Want to read from a raw device? Use dd. Want to write to a raw device? Use dd. This belief can make simple tasks complicated. How do you combine dd with gzip? How do you use pv if the source is raw device? How do you dd over ssh?”
  • “The fact of the matter is, dd is not a disk writing tool. Neither “d” is for “disk”, “drive” or “device”. It does not support “low level” reading or writing. It has no special dominion over any kind of device whatsoever.”
  • Then a number of alternatives are discussed
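To illustrate the point with an ordinary file standing in for a raw device (a hedged sketch; the /tmp paths are illustrative), the same copy can be done with or without dd:

```shell
# disk.img stands in for a raw device node like /dev/sdb.
printf 'hello raw device\n' > /tmp/disk.img

# 1. The canonical dd invocation...
dd if=/tmp/disk.img of=/tmp/copy1 bs=64k 2>/dev/null

# 2. ...is equivalent to plain redirection; cat works on device nodes too.
cat /tmp/disk.img > /tmp/copy2

# 3. Want compression, progress meters, or ssh in the middle? Just build a
#    pipeline -- no dd required at either end.
gzip -c /tmp/disk.img | gzip -dc > /tmp/copy3

cmp -s /tmp/disk.img /tmp/copy1 && cmp -s /tmp/disk.img /tmp/copy2 \
  && cmp -s /tmp/disk.img /tmp/copy3 && echo identical
```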
  • “However, this does not mean that dd is useless! The reason why people started using it in the first place is that it does exactly what it’s told, no more and no less. If an alias specifies -a, cp might try to create a new block device rather than a copy of the file data. If using gzip without redirection, it may try to be helpful and skip the file for not being regular. Neither of them will write out a reassuring status during or after a copy.”
  • “dd, meanwhile, has one job*: copy data from one place to another. It doesn’t care about files, safeguards or user convenience. It will not try to second guess your intent, based on trailing slashes or types of files. When this is no longer a convenience, like when combining it with other tools that already read and write files, one should not feel guilty for leaving dd out entirely.”
  • “dd is the swiss army knife of the open, read, write and seek syscalls. It’s unique in its ability to issue seeks and reads of specific lengths, which enables a whole world of shell scripts that have no business being shell scripts. Want to simulate a lseek+execve? Use dd! Want to open a file with O_SYNC? Use dd! Want to read groups of three byte pixels from a PPM file? Use dd!”
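A small sketch of that seek-and-read ability (the file and offsets are made up for the demo):

```shell
# Ten known bytes to poke at.
printf 'ABCDEFGHIJ' > /tmp/bytes.bin

# Read exactly 3 bytes starting at offset 6 -- effectively lseek(2) + read(2)
# from the shell, something cat and friends cannot express.
dd if=/tmp/bytes.bin bs=1 skip=6 count=3 2>/dev/null
# prints: GHI
```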
  • “It’s a flexible, unique and useful tool, and I love it. My only issue is that, far too often, this great tool is being relegated to and inappropriately hailed for its most generic and least interesting capability: simply copying a file from start to finish.”
  • “dd actually has two jobs: Convert and Copy. Legend has it that the intended name, “cc”, was taken by the C compiler, so the letters were shifted by one to give “dd”. This is also why we ended up with a Window system called X.”
  • dd countdown

Bhyve setup for tcp testing

  • FreeBSD Developer Hiren Panchasara writes about his setup to use bhyve to test changes to the TCP stack in FreeBSD
  • “Here is how I test simple FreeBSD tcp changes with dummynet on bhyve. I’ve already wrote down how I do dummynet so I’ll focus on bhyve part.”
  • “A few months back when I started looking into improving FreeBSD TCP’s response to packet loss, I looked around for traffic simulators which can do deterministic packet drop for me.”
  • “I had used dummynet(4) before so I thought of using it but the problem is that it only provided probabilistic drops. You can specify dropping 10% of the total packets”
  • So he wrote a quick hack, hopefully he’ll polish it up and get it committed
  • “Setup: I’ll create 3 bhyve guests: client, router and server”
  • “Both client and server need their routing tables setup correctly so that they can reach each other. The Dummynet node is the router / traffic shaping node here. We need to enable forwarding between interfaces: sysctl net.inet.ip.forwarding=1”
  • “We need to setup links (called ‘pipes’) and their parameters on dummynet node”
  • “For simulations, I run a lighttpd web-server on the server which serves different sized objects and I request them via curl or wget from the client. I have tcpdump running on any/all of four interfaces involved to observe traffic and I can see specified packets getting dropped by dummynet. sysctl net.inet.ip.dummynet.io_pkt_drop is incremented with each packet that dummynet drops.”
  • “Here, 192.* addresses are for ssh and 10.* are for guests to be able to communicate within themselves.”
  • Create 2 tap interfaces for each endpoint, and 3 for the router: one each for SSH/control, and the others for the test flows. Then create 3 bridges: the first includes all of the control tap interfaces plus your host’s real interface, which allows the guests to reach the internet to download packages, etc. The other two bridges form the connections between the three VMs
  • The creation and configuration of the VMs is documented in detail
  • I used a setup very similar to this for teaching the basics of how TCP works when I was teaching at a local community college
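For readers who want to reproduce the stock (probabilistic) dummynet behaviour while the deterministic-drop hack remains unmerged, the router VM's shaping setup might look roughly like this; interface names, addresses, rates, and the loss figure are all illustrative:

```shell
# On the router guest (FreeBSD): forward between the two test interfaces.
sysctl net.inet.ip.forwarding=1
kldload ipfw dummynet

# Two pipes, one per direction, with illustrative bandwidth/delay,
# plus a 10% probabilistic packet-loss rate (plr 0.1).
ipfw pipe 1 config bw 10Mbit/s delay 20ms plr 0.1
ipfw pipe 2 config bw 10Mbit/s delay 20ms plr 0.1
ipfw add 100 pipe 1 ip from 10.0.1.0/24 to 10.0.2.0/24
ipfw add 200 pipe 2 ip from 10.0.2.0/24 to 10.0.1.0/24

# Watch drops accumulate as described above:
sysctl net.inet.ip.dummynet.io_pkt_drop
```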

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post Linking your world | BSD Now 176 first appeared on Jupiter Broadcasting.

]]>
How the Dtrace saved Christmas | BSD Now 175 https://original.jupiterbroadcasting.net/105921/how-the-dtrace-saved-christmas-bsd-now-175/ Thu, 05 Jan 2017 02:07:15 +0000 https://original.jupiterbroadcasting.net/?p=105921 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines OpenSSL 1.1 API migration path, or the lack thereof As many of you will already be aware, the OpenSSL 1.1.0 release intentionally introduced significant […]

The post How the Dtrace saved Christmas | BSD Now 175 first appeared on Jupiter Broadcasting.

]]>
RSS Feeds:

MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed

Become a supporter on Patreon:

Patreon

– Show Notes: –

Headlines

OpenSSL 1.1 API migration path, or the lack thereof

As many of you will already be aware, the OpenSSL 1.1.0 release intentionally introduced significant API changes from the previous release. In summary, a large number of data structures that were previously publically visible have been made opaque, with accessor functions being added in order to get and set some of the fields within these now opaque structs. It is worth noting that the use of opaque data structures is generally beneficial for libraries, since changes can be made to these data structures without breaking the ABI. As such, the overall direction of these changes is largely reasonable.

However, while API change is generally necessary for progression, in this case it would appear that there is NO transition plan and a complete disregard for the impact that these changes would have on the overall open source ecosystem.

So far it seems that the only approach is to place the migration burden onto each and every software project that uses OpenSSL, pushing significant code changes to each project that migrates to OpenSSL 1.1, while maintaining compatibility with the previous API. This is forcing each project to provide their own backwards compatibility shims, which is practically guaranteeing that there will be a proliferation of variable quality implementations; it is almost a certainty that some of these will contain bugs, potentially introducing security issues or memory leaks.

  • I think this will be a bigger issue for other operating systems that do not have the flexibility of the ports tree to deliver a newer version of OpenSSL. If a project switches from the old API to the new API, and the OS only provides the older branch of OpenSSL, how can the application work?
  • Of course, this leaves the issue, if application A wants OpenSSL 1.0, and application B only works with OpenSSL 1.1, how does that work?

Due to a number of factors, software projects that make use of OpenSSL cannot simply migrate to the 1.1 API and drop support for the 1.0 API – in most cases they will need to continue to support both. Firstly, I am not aware of any platform that has shipped a production release with OpenSSL 1.1 – any software that supported OpenSSL 1.1 only, would effectively be unusable on every platform for the time being. Secondly, the OpenSSL 1.0.2 release is supported until the 31st of December 2019, while OpenSSL 1.1.0 is only supported until the 31st of August 2018 – any LTS style release is clearly going to consider shipping with 1.0.2 as a result.

Platforms that are attempting to ship with OpenSSL 1.1 are already encountering significant challenges – for example, Debian currently has 257 packages (out of 518) that do not build against OpenSSL 1.1. There are also hidden gotchas for situations where different libraries are linked against different OpenSSL versions and then share OpenSSL data structures between them – many of these problems will be difficult to detect since they only fail at runtime.

  • It will be interesting to see what happens with OpenSSL, and LibreSSL
  • Hopefully, most projects will decide to switch to the cleaner APIs provided by s2n or libtls, although they do not provide the entire functionality of the OpenSSL API.
  • Hacker News comments

exfiltration via receive timing

Another similar way to create a backchannel but without transmitting anything is to introduce delays in the receiver and measure throughput as observed by the sender. All we need is a protocol with transmission control. Hmmm. Actually, it’s easier (and more reliable) to code this up using a plain pipe, but the same principle applies to networked transmissions.

For every digit we want to “send” back, we sleep a few seconds, then drain the pipe. We don’t care about the data, although if this were a video file or an OS update, we could probably do something useful with it.

Continuously fill the pipe with junk data. If (when) we block, calculate the difference between before and after. This is our secret backchannel data. (The reader and writer use different buffer sizes because on OpenBSD at least, a writer will stay blocked even after a read depending on the space that opens up. Even simple demos have real world considerations.)

In this simple example, the secret data (argv) is shared by the processes, but we can see that the writer isn’t printing them from its own address space. Nevertheless, it works.

Time to add random delays and buffering to firewalls? Probably not.

  • An interesting thought experiment that shows just how many ways there are to covertly convey a message

OpenBSD Desktop in about 30 Minutes

  • Over at hackernews we have a very non-verbose, but handy guide to getting to an OpenBSD desktop in about 30 minutes!
  • First, the guide will assume you’ve already installed OpenBSD 6.0, so you’ll need to at least be at the shell prompt of your freshly installed system to begin.
  • With that, now its time to do some tuning. Editing some resource limits in login.conf will be our initial task, upping some datasize tunables to 2GB
  • Next up, we will edit some of the default “doas” settings to something a bit more workable for desktop computing
  • Another handy trick: editing your .profile to have your PKG_PATH variables set automatically will make installing packages much less tedious
  • One thing some folks may overlook: disabling atime can improve disk performance (you probably don’t care about atime on your desktop anyway), so this guide shows you which knobs to tweak in /etc/fstab to do so
  • After some final WPA / Wifi configuration, we then drop to “mere mortal” mode and begin our package installations. In this particular guide, he will be setting up Lumina Desktop (Which yes, it is on OpenBSD)
  • A few small tweaks later for xscreensaver and your xinitrc file, then you are ready to run “startx” and begin your desktop session!
  • All in all, a great guide which, if you are fast, can probably be done in even less than 30 minutes and will result in a rock-solid OpenBSD desktop running Lumina, no less.
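A condensed sketch of the knobs the guide touches (limits, mirror URL, and package name are illustrative; check the guide itself for the exact settings):

```shell
# /etc/login.conf -- raise the datasize limits for your login class, e.g.:
#   :datasize-max=2048M:\
#   :datasize-cur=2048M:\

# ~/.profile -- let pkg_add find packages without typing PKG_PATH every time:
export PKG_PATH=https://ftp.openbsd.org/pub/OpenBSD/6.0/packages/$(uname -m)/

# /etc/fstab -- add noatime to the options column of each ffs filesystem:
#   /dev/sd0a / ffs rw,noatime 1 1

# Then, once doas is configured, install the desktop as a normal user:
doas pkg_add lumina
```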

How DTrace saved Christmas

  • Adam Leventhal, one of the co-creators of DTrace, wrote up this post about how he uses DTrace at home, to save Christmas

I had been procrastinating making the family holiday card. It was a combination of having a lot on my plate and dreading the formulation of our annual note recapping the year; there were some great moments, but I’m glad I don’t have to do 2016 again. It was just before midnight and either I’d make the card that night or leave an empty space on our friends’ refrigerators.

Adobe Illustrator had other ideas: “Unable to set maximum number of files to be opened”

I’m not the first person to hit this. The problem seems to have existed since CS6 was released in 2012. None of the solutions were working for me, and — inspired by Sara Mauskopf’s excellent post — I was rapidly running out of the time bounds for the project. Enough; I’d just DTrace it.

A colleague scoffed the other day, “I mean, how often do you actually use DTrace?” In his mind DTrace was for big systems, critical system, when dollars and lives were at stake. My reply: I use DTrace every day. I can’t imagine developing software without DTrace, and I use it when my laptop (not infrequently) does something inexplicable (I’m forever grateful to the Apple team that ported it to Mac OS X)

Illustrator is failing on setrlimit(2) and blowing up as result. Let’s confirm that it is in fact returning -1:

$ sudo dtrace -n 'syscall::setrlimit:return/execname == "Adobe Illustrato"/{ printf("%d %d", arg1, errno); }'
dtrace: description 'syscall::setrlimit:return' matched 1 probe
CPU ID FUNCTION:NAME
0 532 setrlimit:return -1 1

There it is. And setrlimit(2) is failing with errno 1 which is EPERM (value too high for non-root user). I already tuned up the files limit pretty high. Let’s confirm that it is in fact setting the files limit and check the value to which it’s being set. To write this script I looked at the documentation for setrlimit(2) (hooray for man pages!) to determine that the position of the resource parameter (arg0) and the type of the value parameter (struct rlimit). I needed the DTrace copyin() subroutine to grab the structure from the process’s address space:

$ sudo dtrace -n 'syscall::setrlimit:entry/execname == "Adobe Illustrato"/{ this->r = *(struct rlimit *)copyin(arg1, sizeof (struct rlimit)); printf("%x %x %x", arg0, this->r.rlim_cur, this->r.rlim_max); }'

dtrace: description 'syscall::setrlimit:entry' matched 1 probe
CPU ID FUNCTION:NAME
0 531 setrlimit:entry 1008 2800 7fffffffffffffff

Looking through /usr/include/sys/resource.h we can see that 1008 corresponds to the number of files (RLIMIT_NOFILE | _RLIMIT_POSIX_FLAG)

The quickest solution was to use DTrace again to whack a smaller number into that struct rlimit. Easy:

$ sudo dtrace -w -n 'syscall::setrlimit:entry/execname == "Adobe Illustrato"/{ this->i = (rlim_t *)alloca(sizeof (rlim_t)); *this->i = 10000; copyout(this->i, arg1 + sizeof (rlim_t), sizeof (rlim_t)); }'

dtrace: description 'syscall::setrlimit:entry' matched 1 probe
dtrace: could not enable tracing: Permission denied

Oh right. Thank you SIP (System Integrity Protection). This is a new laptop (at least a new motherboard due to some bizarre issue) which probably contributed to Illustrator not working when once it did. Because it’s new I haven’t yet disabled the part of SIP that prevents you from using DTrace on the kernel or in destructive mode (e.g. copyout()). It’s easy enough to disable, but I’m reboot-phobic — I hate having to restart my terminals — so I went to plan B: lldb

  • After using DTrace to get the address of the setrlimit function, Adam used lldb to change the result before it got back to the application:

(lldb) break set -n _init
Breakpoint 1: 47 locations.
(lldb) run

(lldb) di -s 0x1006e5b72 -c 1
0x1006e5b72: callq 0x1011628e0 ; symbol stub for: setrlimit
(lldb) memory write 0x1006e5b72 0x31 0xc0 0x90 0x90 0x90
(lldb) di -s 0x1006e5b72 -c 4
0x1006e5b72: xorl %eax, %eax
0x1006e5b74: nop
0x1006e5b75: nop
0x1006e5b76: nop

Next I just did a process detach and got on with making that holiday card…

DTrace was designed for solving hard problems on critical systems, but the need to understand how systems behave exists in development and on consumer systems. Just because you didn’t write a program doesn’t mean you can’t fix it.


News Roundup

Say my Blog’s name!

  • Brian Everly over at functionally paranoid has a treat for us today. Let us give you a moment to get the tin-foil hats on… Ok, done? Let’s begin!
  • He starts off with a look at physical security. He begins by listing your options:

    1. BIOS passwords – Not something I’m typically impressed with. Most can be avoided by opening up the machine, closing a jumper and powering it up to reset the NVRAM to factory defaults. I don’t even bother with them.
    2. Full disk encryption – This one really rings my bell in a positive way. If you can kill power to the box (either because the bad actor has to physically steal it and they aren’t carrying around a pile of car batteries and an inverter or because you can interrupt power to it some other way), then the disk will be encrypted. The other beauty of this is that if a drive fails (and they all do eventually) you don’t have to have any privacy concerns about chucking it into an electronics recycler (or if you are a bad, bad person, into a landfill) because that data is effectively gibberish without the key (or without a long time to brute force it).
    3. Two factor auth for logins – I like this one as well. I’m not a fan of biometrics because if your fingerprint is compromised (yes, it can happen – read about the department of defense background checks that were extracted by a bad agent – they included fingerprint images) you can’t exactly send off for a new finger. Things like the YubiKey are pretty slick. They require that you have the physical hardware key as well as the password so unless the bad actor lifted your physical key, they would have a much harder time with physical access to your hardware.
  • Out of those options, Brian mentions that he uses disk encryption and yubi-key for all his secure network systems.

  • Next up is network segmentation; in this case, the first thing to do is change the admin password on any ISP-supplied modem/router. He goes on to warn us about JavaScript attacks being used not against your local machine, but against the router’s admin interface, even when it is not exposed to the WAN. Scary stuff!
  • For added security, he naturally firewalls the router by plugging its LAN port into an OpenBSD box, which provides a second layer of firewall/router protection.
  • What about privacy and browsing? Here’s some more of his tips:

I use Unbound as my DNS resolver on my local network (with all UDP port 53 traffic redirected to it by pf so I don’t have to configure anything on the clients) and then forward the traffic to DNSCrypt Proxy, caching the results in Unbound. I notice ZERO performance penalty for this and it greatly enhances privacy. This combination of Unbound and DNSCrypt Proxy works very well together. You can even have redundancy by having multiple upstream resolvers running on different ports (basically run the DNSCrypt Proxy daemon multiple times pointing to different public resolvers).

I also use Firefox exclusively for my web browsing. By leveraging the tips on this page, you can lock it down to do a great job of privacy protection. The fact that your laptop’s battery drain rate can be used to fingerprint your browser completely trips me out but hey – that’s the world we live in.
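The pf redirect he describes might look something like this fragment (the interface name is illustrative; it assumes Unbound listening on 127.0.0.1:53):

```shell
# /etc/pf.conf fragment: transparently steer all client DNS queries arriving
# on the LAN interface into the local Unbound resolver.
pass in quick on em1 inet proto udp to any port 53 rdr-to 127.0.0.1 port 53
```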

  • What about the cloud you may ask? Well Brian has a nice solution for that as well:

I recently decided I would try to live a cloud-free life and I’ll give you a bit of a synopsis on it. I discovered a wonderful Open Source project called FreeNAS. What this little gem does is allow you to install a FreeBSD/zfs file server appliance on amd64 hardware and have a slick administrative web interface for managing it. I picked up a nice SuperMicro motherboard and chassis that has 4 hot swap drive bays (and two internal bays that I used to mirror the boot volume on) and am rocking the zfs lifestyle! (Thanks Alan Jude!)

One of the nicest features of the FreeNAS is that it provides the ability to leverage the FreeBSD jail functionality in an easy to use way. It also has plugins but the security on those is a bit sketchy (old versions of libraries, etc.) so I decided to roll my own. I created two jails – one to run OwnCloud (yeah, I know about NextCloud and might switch at some point) and the other to run a full SMTP/IMAP email server stack. I used Lets Encrypt to generate the SSL certificates and made sure I hit an A on SSLLabs before I did anything else.

  • His post then goes in to talk about Backups and IoT devices, something else you need to consider in this truely paranoid world we are forced to live in. We even get a nice shout-out near the end!

Enter TarSnap – a company that advertises itself as “Online Backups for the Truly Paranoid”. It brings a tear to my eye – a kindred spirit! 🙂 Thanks again to Alan Jude and Kris Moore from the BSD Now podcast for turning me onto this company. It has a very easy command syntax (yes, it isn’t a GUI tool – suck it up buttercup, you wanted to learn the shell didn’t you?) and even allows you to compile the thing from source if you want to.

  • We’ve only covered some of the highlights here, but you really should take a few moments of your time today and read this top to bottom. Lots of good tips here, already thinking how I can secure my home network better.

The open source book: “Producing Open Source Software”

  • “How to Run a Successful Free Software Project” by Karl Fogel
  • 9 chapters and over 200 pages of content, plus many appendices
  • Some interesting topics include:
    • Choosing a good name
    • version control
    • bug tracking
    • creating developer guidelines
    • setting up communications channels
    • choosing a license (although this guide leans heavily towards the GPL)
    • setting the tone of the project
    • joining or creating a Non-Profit Organization
    • the economics of open source
    • release engineering, packaging, nightly builds, etc
    • how to deal with forks
  • A lot of good information packaged into this ebook
  • This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License

DTrace Flamegraphs for node.js on FreeBSD

  • One of the coolest tools built on top of DTrace is flamegraphs
  • They are a very accurate, and visual way to see where a program is spending its time, which can tell you why it is slow, or where it could be improved. Further enhancements include off-cpu flame graphs, which tell you when the program is doing nothing, which can also be very useful
    > Recently BSD UNIXes are being acknowledged by the application development community as an interesting operating system to deploy to. This is not surprising given that FreeBSD had jails, the original container system, about 17 years ago and a lot of network focused businesses such as netflix see it as the best way to deliver content. This developer interest has led to hosting providers supporting FreeBSD. e.g. Amazon, Azure, Joyent and you can get a 2 months free instance at Digital Ocean.

DTrace is another vital feature for anyone who has had to deal with production issues and has been in FreeBSD since version 9. As of FreeBSD 11 the operating system now contains some great work by Fedor Indutny so you can profile node applications and create flamegraphs of node.js processes without any additional runtime flags or restarting of processes.

  • This is one of the most important things about DTrace. Many applications include some debugging functionality, but they require that you stop the application, and start it again in debugging mode. Some even require that you recompile the application in debugging mode.
  • Being able to attach DTrace to an application, while it is under load, while the problem is actively happening, can be critical to figuring out what is going on.
  • In order to configure your FreeBSD instance to utilize this feature make the following changes to the configuration of the server.

    • Load the DTrace module at boot
    • Increase some DTrace limits
    • Install node with the optional DTrace feature compiled in
    • Follow the generic node.js flamegraph tutorial
      > I hope you find this article useful. The ability to look at a runtime in this manner has saved me twice this year and I hope it will save you in the future too. My next post on FreeBSD and node.js will be looking at some scenarios on utilising the ZFS features.
  • Also check out Brendan Gregg’s ACM Queue Article “The Flame Graph: This visualization of software execution is a new necessity for performance profiling and debugging”
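The setup steps above, sketched as commands (tunables and the ports invocation are illustrative; stackcollapse.pl and flamegraph.pl come from Brendan Gregg's FlameGraph repository):

```shell
# 1. Load DTrace at boot: add to /boot/loader.conf
#      dtraceall_load="YES"
#    or load it immediately:
kldload dtraceall

# 2. Build node with its optional DTrace probes, e.g. from ports
#    (enable the DTRACE option in the config dialog):
#      make -C /usr/ports/www/node config install clean

# 3. Sample node's user stacks at 97Hz for 60 seconds...
dtrace -x ustackframes=100 \
  -n 'profile-97 /execname == "node"/ { @[ustack()] = count(); } tick-60s { exit(0); }' \
  -o node.stacks

# 4. ...and fold the samples into an interactive SVG flamegraph.
stackcollapse.pl node.stacks | flamegraph.pl > node.svg
```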


SSHGuard 2.0 Call for Testing

  • SSHGuard is a tool for monitoring brute force attempts and blocking them
  • It has been a favourite of mine for a while because it runs as a pipe from syslogd, rather than reading the log files from the disk

A lot of work to get SSHGuard working with new log sources (journalctl, macOS log) and backends (firewalld, ipset) has happened in 2.0. The new version also uses a configuration file.

Most importantly, SSHGuard has been split into several processes piped into one another (sshg-logmon | sshg-parser | sshg-blocker | sshg-fw). sshg-parser can run with capsicum(4) and pledge(2). sshg-blocker can be sandboxed in its default configuration (without pid file, whitelist, blacklisting) and has not been tested sandboxed in other configurations.
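In its classic syslogd-pipe deployment on FreeBSD with a pf backend, the wiring looks roughly like this (paths per the ports tree; a sketch, not the only supported setup):

```shell
# /etc/syslog.conf -- feed auth messages straight into sshguard's pipeline:
#   auth.info;authpriv.info    |exec /usr/local/sbin/sshguard

# /etc/pf.conf -- block whatever sshg-blocker puts in the table:
#   table <sshguard> persist
#   block in quick from <sshguard>
```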

  • Breaking the processes up so that the sensitive bits can be sandboxes is very nice to see

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post How the Dtrace saved Christmas | BSD Now 175 first appeared on Jupiter Broadcasting.

]]>
2016 highlights | BSD Now 174 https://original.jupiterbroadcasting.net/105781/2016-highlights-bsd-now-174/ Thu, 29 Dec 2016 10:27:07 +0000 https://original.jupiterbroadcasting.net/?p=105781 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Links ZFS in the trenches | BSD Now 123 One small step for DRM, one giant leap for BSD | BSD Now 143 The Laporte […]

The post 2016 highlights | BSD Now 174 first appeared on Jupiter Broadcasting.

]]>

Links

The post 2016 highlights | BSD Now 174 first appeared on Jupiter Broadcasting.

]]>
Carry on my Wayland son | BSD Now 173 https://original.jupiterbroadcasting.net/105596/carry-on-my-wayland-son-bsd-now-173/ Wed, 21 Dec 2016 23:46:35 +0000 https://original.jupiterbroadcasting.net/?p=105596 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines syspatch in testing state Antoine Jacoutot ajacoutot@ openbsd has posted a call for testing for OpenBSD’s new syspatch tool syspatch(8), a “binary” patch system […]

The post Carry on my Wayland son | BSD Now 173 first appeared on Jupiter Broadcasting.

]]>

Headlines

syspatch in testing state

  • Antoine Jacoutot ajacoutot@ openbsd has posted a call for testing for OpenBSD’s new syspatch tool

syspatch(8), a “binary” patch system for -release is now ready for early testing. This does not use binary diffing to update the system, but regular signed tarballs containing the updated files (ala installer).

I would appreciate feedback on the tool. But please send it directly to me, there’s no need to pollute the list. This is obviously WIP and the tool may or may not change in drastic ways.

These test binary patches are not endorsed by the OpenBSD project and should not be trusted, I am only providing them to get early feedback on the tool. If all goes as planned, I am hoping that syspatch will make it into the 6.1 release; but for it to happen, I need to know how it breaks your systems 🙂

  • Instructions
  • If you test it, report back and let us know how it went

Weston working

  • Over the past few years we’ve had some user-interest in the state of Wayland / Weston on FreeBSD. In the past day or so, Johannes Lundberg has sent in a progress report to the FreeBSD mailing lists.
  • Without further ado:

We had some progress with Wayland that we’d like to share.

Wayland (v1.12.0)
Working

Weston (v1.12.0)
Working (Porting WIP)

Weston-clients (installed with wayland/weston port)
Working

XWayland (run X11 apps in Wayland compositor)
Works (maximized window only) if started manually but not when
launching X11 app from Weston. Most likely problem with Weston IPC.

Sway (i3-compatible Wayland compositor)
Working

SDL20 (Wayland backend)
games/stonesoup-sdl briefly tested.
https://twitter.com/johalun/status/811334203358867456

GDM (with Wayland)
Halted – depends on logind.

GTK3
gtk3-demo runs fine on Weston (might have to set GDK_BACKEND=wayland
first).
GTK3 apps working (gedit, gnumeric, xfce4-terminal tested, xfce desktop
(4.12) does not yet support GTK3)

  • Johannes goes on to give instructions on how / where you can fetch their WiP and do your own testing. At the moment you’ll need Matt Macy’s newer Intel video work, as well as their ports tree which includes all the necessary software bits.
  • Before anybody asks, yes we are watching this for TrueOS!

Where the rubber meets the road (part two)

  • Continuing with our story from Brian Everly from a week ago, we have an update today on the process to dual-boot OpenBSD with Arch Linux.
  • As we last left off, Arch was up and running on the laptop, but some quirks in the hardware meant OpenBSD would take a bit longer.
  • With those issues resolved and the HD seen again, the next issue that reared its head was OpenBSD not seeing the partition tables on the disk. After much frustration, it was time to nuke and pave, starting with OpenBSD first this time.
  • After a successful GPT partitioning and install of OpenBSD, he went back to installing Arch, and then the story got more interesting.

I installed Arch as I detailed in my last post; however, when I fired up gdisk I got a weird error message:

“Warning! Disk size is smaller than the main header indicates! Loading secondary header from the last sector of the disk! You should use ‘v’ to verify disk integrity, and perhaps options on the expert’s menu to repair the disk.”

Immediately after this, I saw a second warning:

“Caution: Invalid backup GPT header, but valid main header; regenerating backup header from main header.”

And, not to be outdone, there was a third:

“Warning! Main and backup partition tables differ! Use the ‘c’ and ‘e’ options on the recovery & transformation menu to examine the two tables.”

Finally (not kidding), there was a fourth:

“Warning! One or more CRCs don’t match. You should repair the disk!”

Given all of that, I thought to myself, “This is probably why I couldn’t see the disk properly when I partitioned it under Linux on the OpenBSD side. I’ll let it repair things and I should be good to go.” I then followed the recommendation and repaired things, using the primary GPT table to recreate the backup one. I then installed Arch and figured I was good to go.

  • After confirming through several additional re-installs that the behavior was reproducible, he then decided to go full-on crazy and partition with MBR. That in and of itself was a challenge since, as he mentions, not many people dual-boot OpenBSD with Linux on MBR, especially using LUKS and LVM!
  • If you want to see the details on how that was done, check it out.
  • The story ends in success though! And better yet:

Now that I have everything working, I’ll restore my config and data to Arch, configure OpenBSD the way I like it and get moving. I’ll take some time and drop a note on the tech@ mailing list for OpenBSD to see if they can figure out what the GPT problem was I was running into. Hopefully it will make that part of the code stronger to get an edge-case bug report like this.

  • Take note here, if you run into issues like this with any OS, be sure to document in detail what happened so developers can explore solutions to the issue.

FreeBSD and ZFS as a time capsule for OS X

  • Do you have any Apple users in your life? Perhaps you also run FreeBSD with ZFS somewhere else in the house or office. Today we have a blog post from Mark Felder which shows how you can use FreeBSD as a time capsule for your OS X systems.
  • The setup is quite simple, to get started you’ll need packages for netatalk3 and avahi-app for service discovery.
  • Next up is the AFP configuration. He helpfully provides a nice example that you should be able to just cut-n-paste. Be sure to check the hosts allow lines and adjust them to fit your network; also adjust the backup location and the list of valid users.
  • A little easier should be the avahi setup, which can be a straight copy-n-paste from the site, which will perform the service advertisements.
  • The final piece is just enabling specific services in /etc/rc.conf and either starting them by hand or rebooting. At this point your OS X systems should be able to discover the new time-capsule provider on the network and DTRT.
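Pulling those pieces together, a minimal sketch looks like the following. The pool path, network, and user are placeholders; Mark's post has the complete configs.

```
# pkg install netatalk3 avahi-app

# /usr/local/etc/afp.conf  (hypothetical values)
[Global]
  hosts allow = 192.168.1.0/24

[TimeMachine]
  path = /tank/timemachine      ; the backup location on your pool
  time machine = yes
  valid users = alice

# /etc/rc.conf additions, then start the services or reboot
dbus_enable="YES"
avahi_daemon_enable="YES"
netatalk_enable="YES"
```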

News Roundup

netbenches – FreeBSD network forwarding performance benchmark results


A tcpdump Tutorial and Primer with Examples

  • Most users will be familiar with the basics of using tcpdump, but this tutorial/primer is likely to fill in a lot of blanks and advance many users’ understanding of tcpdump
  • “tcpdump is the premier network analysis tool for information security professionals. Having a solid grasp of this über-powerful application is mandatory for anyone desiring a thorough understanding of TCP/IP. Many prefer to use higher level analysis tools such as Wireshark, but I believe this to usually be a mistake.”
  • tcpdump is an important tool for any system or network administrator, it is not just for security. It is often the best way to figure out why the network is not behaving as expected.
  • “In a discipline so dependent on a true understanding of concepts vs. rote learning, it’s important to stay fluent in the underlying mechanics of the TCP/IP suite. A thorough grasp of these protocols allows one to troubleshoot at a level far beyond the average analyst, but mastery of the protocols is only possible through continued exposure to them.”
  • Not just that, but TCP/IP is a very interesting protocol, considering how little it has changed in its 40+ year history
  • “First off, I like to add a few options to the tcpdump command itself, depending on what I’m looking at. The first of these is -n, which requests that names are not resolved, resulting in the IPs themselves always being displayed. The second is -X, which displays both hex and ascii content within the packet.”
  • “It’s also important to note that tcpdump only takes the first 96 bytes of data from a packet by default. If you would like to look at more, add the -s number option to the mix, where number is the number of bytes you want to capture. I recommend using 0 (zero) for a snaplength, which gets everything.”
  • The page has a nice table of the most useful options
  • It also has a great primer on doing basic filtering
  • If you are relatively new to using tcpdump, I highly recommend you spend a few minutes reading through this article
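Putting the flags discussed above together, a few typical invocations look like this (interface and addresses are examples; tcpdump normally needs root):

```shell
# No name resolution (-n), hex+ASCII payloads (-X), full packets (-s 0)
tcpdump -i em0 -nX -s 0 host 10.0.0.5

# Basic filter: DNS traffic only, to or from one host
tcpdump -i em0 -n 'port 53 and host 10.0.0.5'

# Save a capture to a file, then read it back later for analysis
tcpdump -i em0 -s 0 -w capture.pcap
tcpdump -nXr capture.pcap
```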

How Unix made it to the top

  • Doug McIlroy gives us a nice background post on how “Unix made it to the top”
  • It’s fairly short / concise, so I felt it would be good to read in its entirety.

It has often been told how the Bell Labs law department became the
first non-research department to use Unix, displacing a newly acquired
stand-alone word-processing system that fell short of the department’s
hopes because it couldn’t number the lines on patent applications,
as USPTO required. When Joe Ossanna heard of this, he told them about
roff and promised to give it line-numbering capability the next day.
They tried it and were hooked. Patent secretaries became remote
members of the fellowship of the Unix lab. In due time the law
department got its own machine.

Less well known is how Unix made it into the head office of AT&T. It
seems that the CEO, Charlie Brown, did not like to be seen wearing
glasses when he read speeches. Somehow his PR assistant learned of
the CAT phototypesetter in the Unix lab and asked whether it might be
possible to use it to produce scripts in large type. Of course it was.
As connections to the top never hurt, the CEO’s office was welcomed
as another outside user. The cost–occasionally having to develop film
for the final copy of a speech–was not onerous.

Having teethed on speeches, the head office realized that Unix could
also be useful for things that didn’t need phototypesetting. Other
documents began to accumulate in their directory. By the time we became
aware of it, the hoard came to include minutes of AT&T board meetings.
It didn’t seem like a very good idea for us to be keeping records from
the inner sanctum of the corporation on a computer where most everybody
had super-user privileges. A call to the PR guy convinced him of the
wisdom of keeping such things on their own premises. And so the CEO’s
office bought a Unix system.

Just as one hears of cars chosen for their cupholders, so were these
users converted to Unix for trivial reasons: line numbers and vanity.


Odd Comments and Strange Doings in Unix

  • Everybody loves easter-eggs, and today we have some fun odd ones from the history throughout UNIX told by Dennis Ritchie.
  • First up, was a fun one where the “mv” command could sometimes print the following “values of b may give rise to dom!”

Like most of the messages recorded in these compilations, this one was produced in some situation that we considered unlikely or as result of abuse; the details don’t matter. I’m recording why the phrase was selected.

The very first use of Unix in the “real business” of Bell Labs was to type and produce patent applications, and for a while in the early 1970s we had three typists busily typing away in the grotty lab on the sixth floor. One day someone came in and observed on the paper sticking out of one of the Teletypes, displayed in magnificent isolation, this ominous phrase:
values of b may give rise to dom!

It was of course obvious that the typist had interrupted a printout (generating the “!” from the ed editor) and moved up the paper, and that the context must have been something like “varying values of beta may give rise to domain wall movement” or some other fragment of a physically plausible patent application.
But the phrase itself was just so striking! Utterly meaningless, but it looks like what… a warning? What is “dom?”

At the same time, we were experimenting with text-to-voice software by Doug McIlroy and others, and of course the phrase was tried out with it. For whatever reason, its rendition of “give rise to dom!” accented the last word in a way that emphasized the phonetic similarity between “doom” and the first syllable of “dominance.” It pronounced “beta” in the British style, “beeta.” The entire occurrence became a small, shared treasure.
The phrase had to be recorded somewhere, and it was, in the v6 source. Most likely it was Bob Morris who did the deed, but it could just as easily have been Ken.
I hope that your browser reproduces the b as a Greek beta.

  • Next up is one you might have heard before:

/* You are not expected to understand this */
Every now and then on Usenet or elsewhere I run across a reference to a certain comment in the source code of the Sixth Edition Unix operating system.

I’ve even been given two sweatshirts that quote it.

Most probably just heard about it, but those who saw it in the flesh either had Sixth Edition Unix (ca. 1975) or read the annotated version of this system by John Lions (which was republished in 1996: ISBN 1-57298-013-7, Peer-to-Peer Communications).
It’s often quoted as a slur on the quantity or quality of the comments in the Bell Labs research releases of Unix. Not an unfair observation in general, I fear, but in this case unjustified.

So we tried to explain what was going on. “You are not expected to understand this” was intended as a remark in the spirit of “This won’t be on the exam,” rather than as an impudent challenge.

  • There’s a few other interesting stories as well, if the odd/fun side of UNIX history at all interests you, I would recommend checking it out.

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post Carry on my Wayland son | BSD Now 173 first appeared on Jupiter Broadcasting.

A tale of BSD from yore | BSD Now 172 https://original.jupiterbroadcasting.net/105421/a-tale-of-bsd-from-yore-bsd-now-172/ Thu, 15 Dec 2016 04:07:27 +0000


– Show Notes: –

Headlines

Call For Testing: OpenSSH 7.4

  • Getting ready to head into the holidays for the end of 2016 means some of us will have spare time on our hands. What a perfect time to get some call-for-testing work done!
  • Damien Miller has issued a public CFT for the upcoming OpenSSH 7.4 release, which considering how much we all rely on SSH I would expect will get some eager volunteers for testing.
  • What are some of the potential breakers?
  • This release removes server support for the SSH v.1 protocol.

  • ssh(1): Remove 3des-cbc from the client’s default proposal. 64-bit
    block ciphers are not safe in 2016 and we don’t want to wait until
    attacks like SWEET32 are extended to SSH. As 3des-cbc was the
    only mandatory cipher in the SSH RFCs, this may cause problems
    connecting to older devices using the default configuration,
    but it’s highly likely that such devices already need explicit
    configuration for key exchange and hostkey algorithms anyway.

  • sshd(8): Remove support for pre-authentication compression.
    Doing compression early in the protocol probably seemed reasonable
    in the 1990s, but today it’s clearly a bad idea in terms of both
    cryptography (cf. multiple compression oracle attacks in TLS) and
    attack surface. Pre-auth compression support has been disabled by
    default for >10 years. Support remains in the client.

  • ssh-agent will refuse to load PKCS#11 modules outside a whitelist
    of trusted paths by default. The path whitelist may be specified
    at run-time.

  • sshd(8): When a forced-command appears in both a certificate and
    an authorized keys/principals command= restriction, sshd will now
    refuse to accept the certificate unless they are identical.
    The previous (documented) behaviour of having the certificate
    forced-command override the other could be a bit confusing and
    error-prone.

  • sshd(8): Remove the UseLogin configuration directive and support
    for having /bin/login manage login sessions.

  • What about new features? 7.4 has some of those to wake you up also:
  • ssh(1): Add a proxy multiplexing mode to ssh(1) inspired by the
    version in PuTTY by Simon Tatham. This allows a multiplexing
    client to communicate with the master process using a subset of
    the SSH packet and channels protocol over a Unix-domain socket,
    with the main process acting as a proxy that translates channel
    IDs, etc. This allows multiplexing mode to run on systems that
    lack file-descriptor passing (used by current multiplexing
    code) and potentially, in conjunction with Unix-domain socket
    forwarding, with the client and multiplexing master process on
    different machines. Multiplexing proxy mode may be invoked using
    ssh -O proxy …

  • sshd(8): Add a sshd_config DisableForwarding option that disables
    X11, agent, TCP, tunnel and Unix domain socket forwarding, as well
    as anything else we might implement in the future. Like the
    ‘restrict’ authorized_keys flag, this is intended to be a simple
    and future-proof way of restricting an account.

  • sshd(8), ssh(1): Support the “curve25519-sha256” key exchange
    method. This is identical to the currently-supported method named
    “curve25519-sha256@libssh.org”.

  • sshd(8): Improve handling of SIGHUP by checking to see if sshd is
    already daemonised at startup and skipping the call to daemon(3)
    if it is. This ensures that a SIGHUP restart of sshd(8) will
    retain the same process-ID as the initial execution. sshd(8) will
    also now unlink the PidFile prior to SIGHUP restart and re-create
    it after a successful restart, rather than leaving a stale file in
    the case of a configuration error. bz#2641

  • sshd(8): Allow ClientAliveInterval and ClientAliveCountMax
    directives to appear in sshd_config Match blocks.

  • sshd(8): Add %-escapes to AuthorizedPrincipalsCommand to match
    those supported by AuthorizedKeysCommand (key, key type,
    fingerprint, etc.) and a few more to provide access to the
    contents of the certificate being offered.

  • Added regression tests for string matching, address matching and
    string sanitisation functions.

  • Improved the key exchange fuzzer harness.

  • Get those tests done and be sure to send feedback, both positive and negative.
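A couple of the new knobs above can be exercised directly while testing; the host and user names here are made up.

```
# Client side: the new proxy multiplexing mode
ssh -O proxy builduser@testhost

# sshd_config: DisableForwarding, plus the ClientAlive directives
# that 7.4 now accepts inside Match blocks
Match User backup
    DisableForwarding yes
    ClientAliveInterval 60
    ClientAliveCountMax 3
```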

How My Printer Caused Excessive Syscalls & UDP Traffic

“3,000 syscalls a second, on an idle machine? That doesn’t seem right. I just booted this machine. The only processes running are those required to boot the SmartOS Global Zone, which is minimal.”

This is a story from 2014, about debugging a machine that was being slowed down by excessive syscalls and UDP traffic. It is also an excellent walkthrough of the basics of DTrace

“Well, at least I have DTrace. I can use this one-liner to figure out what syscalls are being made across the entire system.”

dtrace -n 'syscall:::entry { @[probefunc,probename] = count(); }'

“Wow! That is a lot of lwp_sigmask calls. Now that I know what is being called, it’s time to find out who is doing the calling? I’ll use another one-liner to show me the most common user stacks invoking lwp_sigmask.”

dtrace -n 'syscall::lwp_sigmask:entry { @[ustack()] = count(); }'

“Okay, so this mdnsd code is causing all the trouble. What is the distribution of syscalls for the mdnsd program?”

dtrace -n 'syscall:::entry /execname == "mdnsd"/ { @[probefunc] = count(); } tick-1s { exit(0); }'

“Lots of signal masking and polling. What the hell! Why is it doing this? What is mdnsd anyways? Is there a man page? Googling for mdns reveals that it is used for resolving host names in small networks, like my home network. It uses UDP, and requires zero configuration. Nothing obvious to explain why it’s flipping out. I feel helpless. I turn to the only thing I can trust, the code.”

“Woah boy, this is some messy looking code. This would not pass illumos cstyle checks. Turns out this is code from Darwin—the kernel of OSX.”

“Hmmm…an idea pops into my computer animal brain. I wonder…I wonder if my MacBook is also experiencing abnormal syscall rates? Nooo, that can’t be it. Why would both my SmartOS server and MacBook both have the same problem? There is no good technical reason to link these two. But, then again, I’m dealing with computers here, and I’ve seen a lot of strange things over the years—I switch to my laptop.”

sudo dtrace -n 'syscall::: { @[execname] = count(); } tick-1s { exit(0); }'

Same thing, except mdns is called discoverd on OS X

“I ask my friend Steve Vinoski to run the same DTrace one-liner on his OSX machines. He has both Yosemite and the older Mountain Lion. But, to my dismay, neither of his machines are exhibiting high syscall rates. My search continues.”

“Not sure what to do next, I open the OSX Activity Monitor. In desperation I click on the Network tab.”

“ HOLE—E—SHIT! Two-Hundred-and-Seventy Million packets received by discoveryd. Obviously, I need to stop looking at code and start looking at my network. I hop back onto my SmartOS machine and check network interface statistics.”

“Whatever is causing all this, it is sending about 200 packets a second. At this point, the only thing left to do is actually inspect some of these incoming packets. I run snoop(1M) to collect events on the e1000g0 interface, stopping at about 600 events. Then I view the first 15.”

“ A constant stream of mDNS packets arriving from IP 10.0.1.8. I know that this IP is not any of my computers. The only devices left are my iPhone, AppleTV, and Canon printer. Wait a minute! The printer! Two days earlier I heard some beeping noises…”

“I own a Canon PIXMA MG6120 printer. It has a touch interface with a small LCD at the top, used to set various options. Since it sits next to my desk I sometimes lay things on top of it like a book or maybe a plate after I’m done eating. If I lay things in the wrong place it will activate the touch interface and cause repeated pressing. Each press makes a beeping noise. If the object lays there long enough the printer locks up and I have to reboot it. Just such events occurred two days earlier.”

“I fire up dladm again to monitor incoming packets in realtime. Then I turn to the printer. I move all the crap off of it: two books, an empty plate, and the title for my Suzuki SV650 that I’ve been meaning to sell for the last year. I try to use the touch screen on top of the printer. It’s locked up, as expected. I cut power to the printer and whip my head back to my terminal.”

No more packet storm

“Giddy, I run DTrace again to count syscalls.”

“I’m not sure whether to laugh or cry. I laugh, because, LOL computers. There’s some new dumb shit you deal with everyday, better to roll with the punches and laugh. You live longer that way. At least I got to flex my DTrace muscles a bit. In fact, I felt a bit like Brendan Gregg when he was debugging why OSX was dropping keystrokes.”

“I didn’t bother to root cause why my printer turned into a UDP machine gun. I don’t intend to either. I have better things to do, and if rebooting solves the problem then I’m happy. Besides, I had to get back to what I was trying to do six hours before I started debugging this damn thing.”

There you go. The Internet of Terror has already been on your LAN for years.


Making Getaddrinfo Concurrent in Python on Mac OS and BSD

  • We have a very fun blog post today to pass along originally authored by “A. Jesse Jiryu Davis”. Specifically the tale of one man’s quest to unify the Getaddrinfo in Python with Mac OS and BSD.
  • To give you a small taste of this tale, let us pass along just the introduction

“Tell us about the time you made DNS resolution concurrent in Python on Mac and BSD.
No, no, you do not want to hear that story, my friends. It is nothing but old lore and #ifdefs.

But you made Python more scalable. The saga of Steve Jobs was sung to you by a mysterious wizard with a fanciful nickname! Tell us!

Gather round, then. I will tell you how I unearthed a lost secret, unbound Python from old shackles, and banished an ancient and horrible Mutex Troll. Let us begin at the beginning.“

  • Is your interest piqued? It should be. I’m not sure we could do this blog post justice trying to read it aloud here, but we definitely recommend it if you want to see how he managed to get this bit of code working cross-platform. (And it’s highly entertaining as well.)

“A long time ago, in the 1980s, a coven of Berkeley sorcerers crafted an operating system. They named it after themselves: the Berkeley Software Distribution, or BSD. For generations they nurtured it, growing it and adding features. One night, they conjured a powerful function that could resolve hostnames to IPv4 or IPv6 addresses. It was called getaddrinfo. The function was mighty, but in years to come it would grow dangerous, for the sorcerers had not made getaddrinfo thread-safe.”

“As ages passed, BSD spawned many offspring. There were FreeBSD, OpenBSD, NetBSD, and in time, Mac OS X. Each made its copy of getaddrinfo thread safe, at different times and different ways. Some operating systems retained scribes who recorded these events in the annals. Some did not.”

  • The story continues as our hero battles the Mutex Troll and quests for ancient knowledge

“Apple engineers are not like you and me — they are a shy and secretive folk. They publish only what code they must from Darwin. Their comings and goings are recorded in no bug tracker, their works in no changelog. To learn their secrets, one must delve deep.”

“There is a tiny coven of NYC BSD users who meet at the tavern called Stone Creek, near my dwelling. They are aged and fierce, but I made the Sign of the Trident and supplicated them humbly for advice, and they were kindly to me.”

  • Spoiler: “Without a word, the mercenary troll shouldered its axe and trudged off in search of other patrons on other platforms. Never again would it hold hostage the worthy smiths forging Python code on BSD.”

Using release(7) to create FreeBSD images for OpenStack

  • Following a recent episode where we covered a walkthrough on how to create FreeBSD guest OpenStack images, we wondered if it would be possible to integrate this process into the FreeBSD release(7) process, so the images could be generated consistently and automatically
  • Being the awesome audience that you are, one of you responded by doing exactly that

“During a recent BSDNow podcast, Allan and Kris mentioned that it would be nice to have a tutorial on how to create a FreeBSD image for OpenStack using the official release(7) tools. With that, it came to me that: #1 I do have access to an OpenStack environment and #2 I am interested in having FreeBSD as a guest image in my environment. Looks like I was up for the challenge.”

“Previously, I’ve had success running FreeBSD 11.0-RELEASE on OpenStack but more could/should be done. For instance, as suggested by Allan, wouldn’t it be nice to deploy the latest code from FreeBSD? Running -STABLE or even -CURRENT? Yes, it would. Also, wouldn’t it be nice to customize these images for a specific need? I’d say ‘Yes’ for that as well.”

“After some research I found that the current openstack.conf file, located at /usr/src/release/tools/ could use some extra tweaks to get where I wanted. I’ve created and attached that to a bugzilla on the same topic. You can read about that here.”

  • Steps:
    • Fetch the FreeBSD source code and extract it under /usr/src
    • Once the code is in place, follow the regular process of build(7) and perform a make buildworld buildkernel
    • Change into the release directory (/usr/src/release) and perform a make cloudware
    • make cloudware-release WITH_CLOUDWARE=yes CLOUDWARE=OPENSTACK VMIMAGE=2G

“That’s it! This will generate a qcow2 image with 1.4G in size and a raw image of 2G. The entire process uses the release(7) toolchain to generate the image and should work with newer versions of FreeBSD.”
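The steps above boil down to roughly the following; run as root on a FreeBSD machine, and note that buildworld is not quick. The svn URL is one way to fetch the 11.0 source and is an assumption, not from the post.

```shell
# Fetch the source tree and build world + kernel
svnlite checkout https://svn.freebsd.org/base/releng/11.0 /usr/src
cd /usr/src && make buildworld buildkernel

# Build only the OpenStack image with the release(7) tooling
cd /usr/src/release
make cloudware-release WITH_CLOUDWARE=yes CLOUDWARE=OPENSTACK VMIMAGE=2G
```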


Interview – Rod Grimes – rgrimes@freebsd.org


News Roundup

Configuring the FreeBSD automounter

  • Ever had to configure the FreeBSD auto-mounting daemon? Today we have a blog post that walks us through a few of the configuration knobs you have at your disposal.
  • First up, Tom shows us his /etc/fstab file, and the various UFS partitions he has setup with the ‘noauto’ flag so they are not mounted at system boot.
  • His amd.conf file is pretty basic, with just options enabled to restart mounts, and unmount on exit.
  • Where most users will most likely want to pay attention is in the crafting of an amd.map file
  • Within this file, we have the various command-foo which performs mounts and unmounts of targeted disks / file-systems on demand.
  • Pay special attention to all the special chars, since those all matter and a stray or missing ; could be a source of failure.
  • Lastly a few knobs in rc.conf will enable the various services and a reboot should confirm the functionality.
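As a rough illustration of the shape of those files (device, map entry name, and mount point here are invented; the post has Tom's real ones):

```
# /etc/amd.map -- a program-type entry that mounts a UFS partition on demand
/defaults  type:=program;fs:=${autodir}/${path}
data       mount:="/sbin/mount mount /dev/ada1p1 ${fs}";\
           unmount:="/sbin/umount umount ${fs}"

# /etc/rc.conf -- enable the automounter
amd_enable="YES"
amd_flags="-a /.amd_mnt -l syslog /host /etc/amd.map"
```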

l2k16 hackathon report: LibreSSL manuals now in mdoc(7)

  • Hackathon report by Ingo Schwarze

“Back in the spring two years ago, Kristaps Dzonsons started the pod2mdoc(1) conversion utility, and less than a month later, the LibreSSL project began. During the general summer hackathon in the same year, g2k14, Anthony Bentley started using pod2mdoc(1) for converting LibreSSL manuals to mdoc(7).”

“Back then, doing so still was a pain, because pod2mdoc(1) was still full of bugs and had gaping holes in functionality. For example, Anthony was forced to basically translate the SYNOPSIS sections by hand, and to fix up .Fn and .Xr in the body by hand as well. All the same, he speedily finished all of libssl, and in the autumn of the same year, he mustered the courage to commit his work.”

“Near the end of the following winter, i improved the pod2mdoc(1) tool to actually become convenient in practice and started work on libcrypto, converting about 50 out of the about 190 manuals. Max Fillinger also helped a bit, converting a handful of pages, but i fear i tarried too much checking and committing his work, so he quickly gave up on the task. After that, almost nothing happened for a full year.”

“Now i was finally fed up with the messy situation and decided to put an end to it. So i went to Toulouse and finished the conversion of the remaining 130 manual pages in libcrypto, such that you can now view the documentation of all functions”


Interactive Terminal Utility: smenu

  • Ok, I’ve made no secret of my love for shell scripting. Well today we have a new (somewhat new to us) tool to bring your way.
  • Have you ever needed to deal with large lists of data, perhaps as the result of a long specially crafted pipe?
  • What if you need to select a specific value from a range and then continue processing?
  • Enter ‘smenu’ which can help make your scripting life easier.

“smenu is a selection filter just like sed is an editing filter.

This simple tool reads words from the standard input, presents them in a cool interactive window after the current line on the terminal and writes the selected word, if any, on the standard output.

After having unsuccessfully searched the NET for what I wanted, I decided to try to write my own.

I have tried hard to make its usage as simple as possible. It should work, even when using an old vt100 terminal, and is UTF-8 aware.

  • What this means, is in your interactive scripts, you can much easier present the user with a cursor driven menu to select from a range of possible choices. (Without needing to craft a bunch of dialog flags)
  • Take a look, and hopefully you’ll be able to find creative uses for your shell scripts in the future.
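A tiny example of the pattern, assuming smenu is installed from packages (the script is interactive, so this only illustrates the idea):

```shell
#!/bin/sh
# Let the user pick one running process name, then list matching PIDs.
proc=$(ps -axo comm= | sort -u | smenu -m "Pick a process:")
[ -n "$proc" ] && pgrep -lf "$proc"
```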

Ubuntu still isn’t free software

“Any redistribution of modified versions of Ubuntu must be approved, certified or provided by Canonical if you are going to associate it with the Trademarks. Otherwise you must remove and replace the Trademarks and will need to recompile the source code to create your own binaries. This does not affect your rights under any open source licence applicable to any of the components of Ubuntu. If you need us to approve, certify or provide modified versions for redistribution you will require a licence agreement from Canonical, for which you may be required to pay. For further information, please contact us”

“Mark Shuttleworth just blogged about their stance against unofficial Ubuntu images. The assertion is that a cloud hoster is providing unofficial and modified Ubuntu images, and that these images are meaningfully different from upstream Ubuntu in terms of their functionality and security. Users are attempting to make use of these images, are finding that they don’t work properly and are assuming that Ubuntu is a shoddy product. This is an entirely legitimate concern, and if Canonical are acting to reduce user confusion then they should be commended for that.”

“The appropriate means to handle this kind of issue is trademark law. If someone claims that something is Ubuntu when it isn’t, that’s probably an infringement of the trademark and it’s entirely reasonable for the trademark owner to take action to protect the value associated with their trademark. But Canonical’s IP policy goes much further than that – it can be interpreted as meaning[1] that you can’t distribute works based on Ubuntu without paying Canonical for the privilege, even if you call it something other than Ubuntu. [1]: And by “interpreted as meaning” I mean that’s what it says and Canonical refuse to say otherwise”

“If you ask a copyright holder if you can give a copy of their work to someone else (assuming it doesn’t infringe trademark law), and they say no or insist you need an additional contract, it’s not free software. If they insist that you recompile source code before you can give copies to someone else, it’s not free software. Asking that you remove trademarks that would otherwise infringe trademark law is fine, but if you can’t use their trademarks in non-infringing ways, that’s still not free software.”

“Canonical’s IP policy continues to impose restrictions on all of these things, and therefore Ubuntu is not free software.”


Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post A tale of BSD from yore | BSD Now 172 first appeared on Jupiter Broadcasting.

The APU, BSD Style! | BSD Now 171 https://original.jupiterbroadcasting.net/105291/the-apu-bsd-style-bsd-now-171/ Thu, 08 Dec 2016 01:32:32 +0000


– Show Notes: –

Headlines

OpenBSD on PC Engines APU2

  • A detailed walkthrough of building an OpenBSD firewall on a PC Engines APU2
  • It starts with a breakdown of the parts that were purchased, totaling around $200
  • Then the reader is walked through configuring the serial console, flashing the ROM, and updating the BIOS
  • The next step is actually creating a custom OpenBSD install image, and pre-configuring its serial console. Starting with OpenBSD 6.0, this step is done automatically by the installer
  • Installation:
    • Power off the APU2
    • Insert the bootable OpenBSD installer USB flash drive to one of the USB slots on the APU2
    • Power on the APU2, press F10 to get to the boot menu, and choose to boot from USB (usually option number 1)
    • At the boot> prompt, remember the serial console settings (see above)
    • Also at the boot> prompt, press Enter to start the installer
    • Follow the installation instructions

The driver used for wireless networking is athn(4). It might not work properly out of the box. Once OpenBSD is installed, run fw_update with no arguments. It will figure out which firmware updates are required and will download and install them. When it finishes, reboot.


Where the rubber meets the road… (part one)

  • A user describes their adventures installing OpenBSD and Arch Linux on a new Lenovo X1 Carbon (4th gen, skylake)
  • They also detail why they moved away from their beloved MacBook, which, while long, does describe a journey away from Apple that we’ve heard elsewhere.
  • The journey begins with getting a new Windows laptop, shrinking the partition and creating space for a triple-boot install, of Windows / Arch / OpenBSD
  • Brian then details how he set up the partitioning and performed the initial Arch installation, getting it tuned to his specifications.
  • Next up was OpenBSD though, and that went sideways initially due to a new NVMe drive that wasn’t fully supported (yet)
  • The article is split into two parts (we will bring you the next installment at a future date), but he leaves us with the plan of attack to build a custom OpenBSD kernel with corrected PCI device identifiers.
  • We wish Brian luck, and look forward to the “rest of the story” soon.

Howto setup a FreeBSD jail server using iocage and ansible.

  • Setting up a FreeBSD jail server can be a daunting task. However, when a guide comes along that shows you how to do exactly that, including not exposing a single (non-jailed) port to the outside world, you know we had to take a closer look.
  • This guide comes to us from GitHub, courtesy of Joerg Fielder.
  • The project goals seem notable:

  • Ansible playbook that creates a FreeBSD server which hosts multiple jails.

    • Travis is used to run/test the playbook.
    • No service on the host is exposed externally.
    • All external connections terminate within a jail.
    • Roles can be reused using Ansible Galaxy.
    • Combine any of these roles to create a FreeBSD server that perfectly suits you.
  • To get started, you’ll need a machine with Ansible, Vagrant and VirtualBox, and your credentials to AWS if you want it to automatically create / destroy EC2 instances.
  • There’s already an impressive list of Ansible roles created for you to start with:

    • freebsd-build-server – Creates a FreeBSD poudriere build server
    • freebsd-jail-host – FreeBSD Jail host
    • freebsd-jailed – Provides a jail
    • freebsd-jailed-nginx – Provides a jailed nginx server
    • freebsd-jailed-php-fpm – Creates a php-fpm pool and a ZFS dataset which is used as web root by php-fpm
    • freebsd-jailed-sftp – Installs a SFTP server
    • freebsd-jailed-sshd – Provides a jailed sshd server.
    • freebsd-jailed-syslogd – Provides a jailed syslogd
    • freebsd-jailed-btsync – Provides a jailed btsync instance server
    • freebsd-jailed-joomla – Installs Joomla
    • freebsd-jailed-mariadb – Provides a jailed MariaDB server
    • freebsd-jailed-wordpress – Provides a jailed WordPress server.
  • Since the machines have to be customized before starting, he mentions that cloud-init is used to do the following:

  • activate pf firewall

  • add a pass all keep state rule to pf to keep track of connection states, which in turn allows you to reload the pf service without losing the connection
  • install the following packages:
    • sudo
    • bash
    • python27
  • allow passwordless sudo for user ec2-user
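
The “pass all keep state” rule mentioned above would look roughly like this in /etc/pf.conf (an illustration only; a production ruleset would be far stricter):

```
# /etc/pf.conf -- minimal stateful ruleset (illustration only)
# "keep state" is the default on modern pf, but spelling it out makes
# the intent explicit: established connections are tracked in the state
# table, so they survive a ruleset reload via `pfctl -f /etc/pf.conf`.
pass all keep state
```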

  • From there it is pretty straightforward: just a couple of commands to spin up the VMs, either locally on your VirtualBox host or in the cloud with AWS. Internally, the VMs are auto-configured with iocage to create jails, where all your actual services run.

  • A neat project, check it out today if you want a shake-n-bake type cloud + jail solution.

Colin Percival’s bsdiff helps reduce Android apk bandwidth usage by 6 petabytes per day

  • A post on the official Android Developers blog talks about how they used bsdiff (and bspatch) to reduce the size of Android application updates by 65%
  • bsdiff was developed by FreeBSD’s Colin Percival

Earlier this year, we announced that we started using the bsdiff algorithm (by Colin Percival). Using bsdiff, we were able to reduce the size of app updates on average by 47% compared to the full APK size.

  • This post is actually about the second generation of the code.

Today, we’re excited to share a new approach that goes further — File-by-File patching. App Updates using File-by-File patching are, on average, 65% smaller than the full app, and in some cases more than 90% smaller.
Android apps are packaged as APKs, which are ZIP files with special conventions. Most of the content within the ZIP files (and APKs) is compressed using a technology called Deflate. Deflate is really good at compressing data but it has a drawback: it makes identifying changes in the original (uncompressed) content really hard. Even a tiny change to the original content (like changing one word in a book) can make the compressed output of deflate look completely different. Describing the differences between the original content is easy, but describing the differences between the compressed content is so hard that it leads to inefficient patches.

  • So in the second generation of the code, they use bsdiff on each individual file, then package that, rather than diffing the original and new archives
  • bsdiff is used in a great many other places, including shrinking the updates for the Firefox and Chrome browsers
  • You can find out more about bsdiff here: https://www.daemonology.net/bsdiff/

A far more sophisticated algorithm, which typically provides roughly 20% smaller patches, is described in my doctoral thesis.

  • Considering the gains, it is interesting that no one has implemented Colin’s more sophisticated algorithm
  • Colin had an interesting observation last night: “I just realized that bandwidth savings due to bsdiff are now roughly equal to what the total internet traffic was when I wrote it in 2003.”

News Roundup

Distrowatch does an in-depth review of NAS4Free

  • Jesse Smith over at DistroWatch has done a pretty in-depth review of NAS4Free.
  • The review starts by mentioning that NAS4Free works on three platforms (ARM, i386, and AMD64); for the purposes of this review he used the AMD64 builds.
  • After going through the initial install (doing typical disk management operations, such as GPT/MBR, etc) he was ready to begin using the product.
  • One initial concern was that the first boot seemed rather slow. Investigation revealed this was due to the entire OS image being loaded into memory; that first (long) disk read did take some time, but once loaded the system was super responsive.
  • The next step was the initial configuration, which meant creating a new ZFS storage pool. After that was done, he did find one puzzling UI option called “VM”, which indicated it can be linked to VirtualBox in some way, but the docs didn’t reveal how it is meant to be used.
  • Also covered were the various “Access” methods, including traditional UNIX permissions, AD, and LDAP, and then the sharing services typical of a NAS, such as NFS, Samba, and others.
  • One neat feature was the built-in file browser in the web interface, which gives you another way of getting at your data when NFS, Samba, or WebDAV aren’t enough.
  • Jesse gives us a nice round-up conclusion as well

Most of the NAS operating systems I have used in the past were built around useful features. Some focused on making storage easy to set up and manage, others focused on services, such as making files available over multiple protocols or managing torrents. Some strive to be very easy to set up. NAS4Free does pretty well in each of the above categories. It may not be the easiest platform to set up, but it’s probably a close second. It may not have the prettiest interface for managing settings, but it is quite easy to navigate. NAS4Free may not have the most add-on services and access protocols, but I suspect there are more than enough of both for most people.

Where NAS4Free does better than most other solutions I have looked at is security. I don’t think the project’s website or documentation particularly focuses on security as a feature, but there are plenty of little security features that I liked. NAS4Free makes it very easy to lock the text console, which is good because we do not all keep our NAS boxes behind locked doors. The system is fairly easy to upgrade and appears to publish regular security updates in the form of new firmware. NAS4Free makes it fairly easy to set up user accounts, handle permissions and manage home directories. It’s also pretty straight forward to switch from HTTP to HTTPS and to block people not on the local network from accessing the NAS’s web interface.

All in all, I like NAS4Free. It’s a good, general purpose NAS operating system. While I did not feel the project did anything really amazing in any one category, nor did I run into any serious issues. The NAS ran as expected, was fairly straight forward to set up and easy to manage. This strikes me as an especially good platform for home or small business users who want an easy set up, some basic security and a solid collection of features.


Browsix: Unix in the browser tab

  • Browsix is a research project from the PLASMA lab at the University of Massachusetts, Amherst.
  • The goal: Run C, C++, Go and Node.js programs as processes in browsers, including LaTeX, GNU Make, Go HTTP servers, and POSIX shell scripts.
  • Processes are built on top of Web Workers, letting applications run in parallel and spawn subprocesses. System calls include fork, spawn, exec, and wait.

Pipes are supported with pipe(2) enabling developers to compose processes into pipelines.

Sockets include support for TCP socket servers and clients, making it possible to run applications like databases and HTTP servers together with their clients in the browser.

  • Browsix comprises two core parts:
    • A kernel written in TypeScript that makes core Unix features (including pipes, concurrent processes, signals, sockets, and a shared file system) available to web applications.
    • Extended JavaScript runtimes for C, C++, Go, and Node.js that support running programs written in these languages as processes in the browser.
  • This seems like an interesting project, although I am not sure how it would be used as more than a toy

Book Review: PAM Mastery

  • nixCraft does a book review of Michael W. Lucas’ “Pam Mastery”

Linux, FreeBSD, and Unix-like systems are multi-user and need some way of authenticating individual users. Back in the old days, this was done in different ways. You need to change each Unix application to use different authentication scheme.

  • Before PAM, if you wanted to use an SQL database to authenticate users, you had to write specific support for that into each of your applications. Same for LDAP, etc.

So Open Group lead to the development of PAM for the Unix-like system. Today Linux, FreeBSD, MacOS X and many other Unix-like systems are configured to use a centralized authentication mechanism called Pluggable Authentication Modules (PAM). The book “PAM Mastery” deals with the black magic of PAM.

  • Of course, each OS chose to implement PAM a little bit differently

The book starts with the basic concepts about PAM and authentication. You learn about Multi-Factor Authentication and why use PAM instead of changing each program to authenticate the user. The author went into great details about why PAM is useful for developers and sysadmin for several reasons. The examples cover CentOS Linux (RHEL and clones), Debian Linux, and FreeBSD Unix system.

I like the way the author described PAM Configuration Files and Common Modules that covers everyday scenarios for the sysadmin. PAM configuration file format and PAM Module Interfaces are discussed in easy to understand language. Control flags in PAM can be very confusing for new sysadmins. Modules can be stacked in a particular order, and the control flags determine how important the success or failure of a particular module.

There is also a chapter about using one-time passwords (Google Authenticator) for your application.

The final chapter is all about enforcing good password policies for users and apps using PAM.

The sysadmin would find this book useful as it covers a common authentication scheme that can be used with a wide variety of applications on Unix. You will master PAM topics and take control over authentication for your organization IT infrastructure. If you are Linux or Unix sysadmin, I would highly recommend this book. Once again Michael W Lucas nailed it. The only book you may need for PAM deployment.
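
The control-flag stacking the review describes looks something like this in a PAM service file (a hedged illustration only; module names, paths, and available flags vary between Linux distributions and FreeBSD):

```
# /etc/pam.d/sshd (illustrative)
# type     control    module
auth       required   pam_unix.so                    # password check must succeed
auth       optional   pam_google_authenticator.so    # one-time password, if configured
account    required   pam_unix.so
session    required   pam_permit.so
```

Modules of each type run in the order listed; the control flag (required, requisite, sufficient, optional) determines how much the success or failure of each module matters to the overall result.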


Reflections on Trusting Trust – Ken Thompson, co-author of UNIX

Ken Thompson’s “cc hack” – Presented in the journal, Communication of the ACM, Vol. 27, No. 8, August 1984, in a paper entitled “Reflections on Trusting Trust”, Ken Thompson, co-author of UNIX, recounted a story of how he created a version of the C compiler that, when presented with the source code for the “login” program, would automatically compile in a backdoor to allow him entry to the system. This is only half the story, though. In order to hide this trojan horse, Ken also added to this version of “cc” the ability to recognize if it was recompiling itself to make sure that the newly compiled C compiler contained both the “login” backdoor, and the code to insert both trojans into a newly compiled C compiler. In this way, the source code for the C compiler would never show that these trojans existed.

  • The article starts off by talking about a contest to write a program that produces its own source code as output. Or rather, a C program that writes a C program that produces its own source code as output.

The C compiler is written in C. What I am about to describe is one of many “chicken and egg” problems that arise when compilers are written in their own language. In this case, I will use a specific example from the C compiler.

Suppose we wish to alter the C compiler to include the sequence “\v” to represent the vertical tab character. The extension to Figure 2 is obvious and is presented in Figure 3. We then recompile the C compiler, but we get a diagnostic. Obviously, since the binary version of the compiler does not know about “\v,” the source is not legal C. We must “train” the compiler. After it “knows” what “\v” means, then our new change will become legal C. We look up on an ASCII chart that a vertical tab is decimal 11. We alter our source to look like Figure 4. Now the old compiler accepts the new source. We install the resulting binary as the new official C compiler and now we can write the portable version the way we had it in Figure 3.

The actual bug I planted in the compiler would match code in the UNIX “login” command. The replacement code would miscompile the login command so that it would accept either the intended encrypted password or a particular known password. Thus if this code were installed in binary and the binary were used to compile the login command, I could log into that system as any user. Such blatant code would not go undetected for long. Even the most casual perusal of the source of the C compiler would raise suspicions.

Next “simply add a second Trojan horse to the one that already exists. The second pattern is aimed at the C compiler. The replacement code is a Stage I self-reproducing program that inserts both Trojan horses into the compiler. This requires a learning phase as in the Stage II example. First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere.

So now there is a trojan’d version of cc. If you compile a clean version of cc, using the bad cc, you will get a bad cc. If you use the bad cc to compile the login program, it will have a backdoor. The source code for both backdoors no longer exists on the system. You can audit the source code of cc and login all you want, they are trustworthy.

The compiler you use to compile your new compiler, is the untrustworthy bit, but you have no way to know it is untrustworthy, and no way to make a new compiler, without using the bad compiler.

The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.

Acknowledgment: I first read of the possibility of such a Trojan horse in an Air Force critique of the security of an early implementation of Multics. I cannot find a more specific reference to this document. I would appreciate it if anyone who can supply this reference would let me know.


Beastie Bits

From December 27th until the 30th, the 33rd Chaos Communication Congress is going to take place in Hamburg, Germany. Think of it as the yearly gathering of the European hacker scene and their overseas friends. I am one of the persons organizing the “BSD assembly” as a gathering place for BSD enthusiasts and for waving the flag amidst all the other projects / communities.


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post The APU, BSD Style! | BSD Now 171 first appeared on Jupiter Broadcasting.

]]>
Sandboxing Cohabitation | BSD Now 170 https://original.jupiterbroadcasting.net/105116/sandboxing-cohabitation-bsd-now-170/ Thu, 01 Dec 2016 03:52:34 +0000 https://original.jupiterbroadcasting.net/?p=105116 RSS Feeds: MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed Become a supporter on Patreon: – Show Notes: – Headlines EuroBSDcon 2016 Presentation Slides Due to circumstances beyond the control of the organizers of EuroBSDCon, there were not recordings of the talks given at […]

The post Sandboxing Cohabitation | BSD Now 170 first appeared on Jupiter Broadcasting.

]]>
RSS Feeds:

MP3 Feed | OGG Feed | iTunes Feed | Video Feed | HD Vid Feed | HD Torrent Feed

Become a supporter on Patreon:

Patreon

– Show Notes: –

Headlines

EuroBSDcon 2016 Presentation Slides

  • Due to circumstances beyond the control of the organizers of EuroBSDCon, there were no recordings of the talks given at the event.
  • However, they have collected the slide decks from each of the speakers and assembled them on this page for you
  • Also, we have some stuff from MeetBSD already:
  • Youtube Playlist
  • Not all of the sessions are posted yet, but the rest should appear shortly
  • MeetBSD 2016 Trip Report: Domagoj Stolfa

Cohabiting FreeBSD and Gentoo Linux on a Common ZFS Volume

  • Eric McCorkle, who has contributed ZFS support to the FreeBSD EFI boot-loader code, has posted an in-depth look at how he set up dual-boot with FreeBSD and Gentoo on the same ZFS volume.
  • He starts by giving us some background on how the layout is done. First up, GRUB is used as the boot-loader, allowing it to boot both Linux and BSD
  • The next non-typical choice was using /etc/fstab to manage mount-points, instead of the typical ‘zfs mount’ usage (apart from the /home datasets)

  • data/home is mounted to /home, with all of its child datasets using the ZFS mountpoint system

  • data/freebsd and its child datasets house the FreeBSD system, and all have their mountpoints set to legacy
  • data/gentoo and its child datasets house the Gentoo system, and have their mountpoints set to legacy as well
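
The legacy-mountpoint arrangement might look like this on the FreeBSD side (the pool and parent dataset names follow the article; the child datasets shown are assumptions for illustration):

```
# Set once, per OS dataset tree:
#   zfs set mountpoint=legacy data/freebsd
# Then FreeBSD's /etc/fstab mounts its own datasets explicitly, e.g.:
#
# Device            Mountpoint  FStype  Options  Dump  Pass
data/freebsd/usr    /usr        zfs     rw       0     0
data/freebsd/var    /var        zfs     rw       0     0
```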

  • So, how did he set this up? He helpfully provides an overview of the steps:

    • Use the FreeBSD installer to create the GPT and ZFS pool
    • Install and configure FreeBSD, with the native FreeBSD boot loader
    • Boot into FreeBSD, create the Gentoo Linux datasets, install GRUB
    • Boot into the Gentoo Linux installer, install Gentoo
    • Boot into Gentoo, finish any configuration tasks
  • The rest of the article walks us through the individual commands that make up each of those steps, as well as how to craft a GRUB config file capable of booting both systems.
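
A GRUB menu entry for the FreeBSD side might look roughly like the sketch below (the pool and dataset names follow the article, but the kfreebsd module list and the dataset path syntax are assumptions, so check the original post for the exact config):

```
menuentry "FreeBSD" {
    insmod zfs
    search --no-floppy --label --set=root data
    kfreebsd /freebsd/@/boot/kernel/kernel
    kfreebsd_module_elf /freebsd/@/boot/kernel/opensolaris.ko
    kfreebsd_module_elf /freebsd/@/boot/kernel/zfs.ko
    set kFreeBSD.vfs.root.mountfrom=zfs:data/freebsd
}
```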

  • Personally, since we are using EFI, I would have installed rEFInd and chain-loaded each system’s EFI boot code from there, allowing the use of the BSD loader, but to each their own!

HardenedBSD introduces Safestack into base

  • HardenedBSD has integrated SafeStack into its base system and ports tree
  • SafeStack is part of the Code Pointer Integrity (CPI) project within clang.
  • “SafeStack is an instrumentation pass that protects programs against attacks based on stack buffer overflows, without introducing any measurable performance overhead. It works by separating the program stack into two distinct regions: the safe stack and the unsafe stack. The safe stack stores return addresses, register spills, and local variables that are always accessed in a safe way, while the unsafe stack stores everything else. This separation ensures that buffer overflows on the unsafe stack cannot be used to overwrite anything on the safe stack.”
  • “As of 28 November 2016, with clang 3.9.0, SafeStack only supports being applied to applications and not shared libraries. Multiple patches have been submitted to clang by third parties to add support for shared libraries.”
  • SafeStack is only enabled on AMD64

pledge(2)… or, how I learned to love web application sandboxing

  • We’ve talked about OpenBSD’s sandboxing mechanism pledge() in the past, but today we have a great article by Kristaps Dzonsons, about how he grew to love it for Web Sandboxing.
  • First up, he gives us his opening argument that should make most of you sit up and listen:

I use application-level sandboxing a lot because I make mistakes a lot; and when writing web applications, the price of making mistakes is very dear.

In the early 2000s, that meant using systrace(4) on OpenBSD and NetBSD. Then it was seccomp(2) (followed by libseccomp(3)) on Linux. Then there was capsicum(4) on FreeBSD and sandbox_init(3) on Mac OS X.

All of these systems are invoked differently; and for the most part, whenever it came time to interface with one of them, I longed for sweet release from the nightmare. Please, try reading seccomp(2). To the end. Aligning web application logic and security policy would require an arduous (and usually trial-and-error or worse, copy-and-paste) process. If there was any process at all — if the burden of writing a policy didn’t cause me to abandon sandboxing at the start.

And then there was pledge(2).

This document is about pledge(2) and why you should use it and love it.

  • Not convinced yet? Maybe you should take his challenge:

Let’s play a drinking game. The challenge is to stay out of the hospital.

1. Navigate to seccomp(2).
2. Read it to the end.
3. Drink every time you don’t understand.

For capsicum(4), the challenge is no less difficult. To see these in action, navigate no further than OpenSSH, which interfaces with these sandboxes: sandbox-seccomp-filter.c or sandbox-capsicum.c. (For a history lesson, you can even see sandbox-systrace.c.) Keep in mind that these do little more than restrict resources to open descriptors and the usual necessities of memory, signals, timing, etc. Keep that in mind and be horrified.

  • Now, Kristaps has his theory on why these are so difficult, but perhaps there is a better way. He makes the case that pledge() sits right in that sweet spot: powerful enough to be useful, but easy enough to implement that developers might actually use it.
  • All in all, a nice read, check it out! Would love to hear other developer success stories using pledge() as well.

News Roundup

Unix history repository, now on GitHub

  • OS News has an interesting tidbit on their site today, about the entire commit history of Unix now being available online, starting all the way back in 1970 and bringing us forward to today.

  • From the README

The history and evolution of the Unix operating system is made available as a revision management repository, covering the period from its inception in 1970 as a 2.5 thousand line kernel and 26 commands, to 2016 as a widely-used 27 million line system. The 1.1GB repository contains about half a million commits and more than two thousand merges. The repository employs Git system for its storage and is hosted on GitHub. It has been created by synthesizing with custom software 24 snapshots of systems developed at Bell Labs, the University of California at Berkeley, and the 386BSD team, two legacy repositories, and the modern repository of the open source FreeBSD system. In total, about one thousand individual contributors are identified, the early ones through primary research. The data set can be used for empirical research in software engineering, information systems, and software archaeology.

  • This is a fascinating find that will be of particular value to students and historians who wish to look back in time to see how UNIX evolved and, in this repo, ultimately turned into modern FreeBSD.

Yandex commits improvements to FreeBSD network stack

  • “Rework ip_tryforward() to use FIB4 KPI.”
  • This commit brings some code from the experimental routing branch into head
  • As you can see from the graphs, it offers some sizable improvements in forwarding and firewalled packets per second
  • commit

The brief history of Unix socket multiplexing – select(2) system call

  • Ever wondered about the details of socket multiplexing, aka the history of select(2)?
  • Well, today Marek gives us a treat, with a quick look back at the history that made modern multiplexing possible.
  • First, his article starts the way all good ones do, presenting the problem in silent-movie form:

In mid-1960’s time sharing was still a recent invention. Compared to a previous paradigm – batch-processing – time sharing was truly revolutionary. It greatly reduced the time wasted between writing a program and getting its result. Batch-processing meant hours and hours of waiting often to only see a program error. See this film to better understand the problems of 1960’s programmers: “The trials and tribulations of batch processing”.

  • Enter the wild world of the 1970’s, and we’ve now reached the birth of UNIX which tried to solve the batch processing problem with time-sharing.

These days when a program was executed, it could “stall” (block) only on a couple of things:

  • wait for CPU
  • wait for disk I/O
  • wait for user input (waiting for a shell command) or console (printing data too fast)
  • Jump forward another dozen years or so, and the world changes yet again:

This all changed in 1983 with the release of 4.2BSD. This revision introduced an early implementation of a TCP/IP stack and most importantly – the BSD Sockets API.
Although today we take the BSD sockets API for granted, it wasn’t obvious it was the right API. STREAMS were a competing API design on System V Revision 3.

  • Coming in along with the sockets API was the select(2) call, which our very own Kirk McKusick gives us some background on:

Select was introduced to allow applications to multiplex their I/O.

Consider a simple application like a remote login. It has descriptors for reading from and writing to the terminal and a descriptor for the (bidirectional) socket. It needs to read from the terminal keyboard and write those characters to the socket. It also needs to read from the socket and write to the terminal. Reading from a descriptor that has nothing queued causes the application to block until data arrives. The application does not know whether to read from the terminal or the socket and if it guesses wrong will incorrectly block. So select was added to let it find out which descriptor had data ready to read. If neither, select blocks until data arrives on one descriptor and then awakens telling which descriptor has data to read.

[…] Non-blocking was added at the same time as select. But using non-blocking when reading descriptors does not work well. Do you go into an infinite loop trying to read each of your input descriptors? If not, do you pause after each pass and if so for how long to remain responsive to input? Select is just far more efficient.

Select also lets you create a single inetd daemon rather than having to have a separate daemon for every service.

  • The article then wraps up with an interesting conclusion:
    > CSP = Communicating sequential processes

In this discussion I was afraid to phrase the core question. Were Unix processes intended to be CSP-style processes? Are file descriptors a CSP-derived “channels”? Is select equivalent to ALT statement?

I think: no. Even if there are design similarities, they are accidental. The file-descriptor abstractions were developed well before the original CSP paper.

It seems that an operating socket API’s evolved totally disconnected from the userspace CSP-alike programming paradigms. It’s a pity though. It would be interesting to see an operating system coherent with the programming paradigms of the user land programs.

  • A long (but good) read, and worth your time if you are interested in the history how modern multiplexing came to be.

How to start CLion on FreeBSD?

  • CLion (pronounced “sea lion”) is a cross-platform C and C++ IDE
  • By default, the Linux version comes bundled with some binaries, which obviously won’t work with the native FreeBSD build
  • Rather than using Linux emulation, you can replace these components with native versions
    • pkg install openjdk8 cmake gdb
    • Edit clion-2016.3/bin/idea.properties and change run.processes.with.pty=false
    • Start CLion and open Settings | Build, Execution, Deployment | Toolchains
    • Specify CMake path: /usr/local/bin/cmake and GDB path: /usr/local/bin/gdb
  • Without a replacement for fsnotifier, you will get a warning that the IDE may be slow to detect changes to files on disk
  • But, someone has already written a version of fsnotifier that works on FreeBSD and OpenBSD
  • fsnotifier for OpenBSD and FreeBSD — The fsnotifier is used by IntelliJ for detecting file changes. This version supports FreeBSD and OpenBSD via libinotify and is a replacement for the bundled Linux-only version coming with the IntelliJ IDEA Community Edition.

Beastie Bits


Feedback/Questions


  • Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

The post Sandboxing Cohabitation | BSD Now 170 first appeared on Jupiter Broadcasting.

]]>