Trying the “btrfs” file system

There has been some urging for beta testers to try out “btrfs”.  So I did.  I tried it on one of my 13.1Beta1 installs.  I would have tried it on two installs, except that the UEFI install had already run into problems before I got to that point.

The pitch to beta testers was that this is the way to find any problematic bugs, and also the way to build a community of users who can advise others when they try it.

What’s btrfs?

This is a relatively new file system for Linux.  It has been around for several years, but was mainly experimental.  While not yet fully implemented as planned, current reports are that it is close and is stable, so it should be a file system to consider.

The name stands for “b-tree file system” or “better file system” depending on whom you ask.  As the first of those suggests, the file system is implemented as a b-tree data structure on disk.  The directory is, in effect, an index of entries in the b-tree data structure.

One of the “features” is that you can take snapshots.  A snapshot is, in effect, an alternate index, which represents the file system as if frozen at an instant in time.  One use of this is to then do a backup of the snapshot, which presumably is not affected by the addition and deletion of files during the process of taking the backup.

When you remove a file, that removes the entry in the main directory.  But there might still be an entry in one of the existing snapshots.  In principle, this allows the recovery of a removed file.  The cost, however, is that the space for that removed file has not been freed, and won’t be freed until the snapshot is deleted.

A background daemon is supposed to delete sufficiently old snapshots.  But you might have to manually delete some if you are running short of space on the file system.
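On openSUSE, that housekeeping is handled by “snapper”.  A rough sketch of the manual cleanup commands (run as root; the configuration name “root” is the openSUSE default for the root file system, and the snapshot numbers are made-up examples — check “snapper list” for the real ones):

```shell
# List the existing snapshots for the root file system configuration.
snapper -c root list

# Delete a single old snapshot by its number (42 is just an example).
snapper -c root delete 42

# Or delete a whole range of snapshots at once.
snapper -c root delete 10-40
```

Deleting a snapshot is what actually frees the space still held by files that were removed after the snapshot was taken.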

My test of btrfs

For my testing, I used “btrfs” for the root file system of a 13.1B1 install.  I left the “home” partition as “ext4”, because I did not want to back up, format and restore that.

So here’s the status of the root file system:

Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda9       20971520 18215000   1817408  91% /

For comparison, here is similar information on another 13.1B1 install, where I used “ext4” rather than “btrfs”.  On both systems, I installed pretty much the same software (KDE, Gnome, XFCE, LXDE).

Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sdb4       20642428 7029844  12564008  36% /

As you can see, there is quite a difference in the amount of space used.

The difference is not entirely due to using “btrfs”.  On the first of those two systems, I ran updates, which updated just about everything.  I recently posted about that update.

The updating accounts for some of the extra space.  The updated software took a few hundred megabytes more space than what was originally installed.  It created logs, and it added entries to the rpm database.  But that falls far short of explaining the difference in space used.  The remainder is partly due to copies of deleted files that are still referenced by automatically taken snapshots.  And it is partly due to the additional overhead (meta-data) for “btrfs”.
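For a rough sense of scale, the gap between the two “df” outputs works out like this (just arithmetic on the 1K-block figures shown above):

```shell
# Used space from the two "df" outputs above, in 1K blocks.
btrfs_used=18215000
ext4_used=7029844

diff_kb=$((btrfs_used - ext4_used))     # difference in 1K blocks
diff_gib=$((diff_kb / 1024 / 1024))     # whole GiB (integer division)

echo "btrfs root uses ${diff_kb} KB (~${diff_gib} GiB) more than the ext4 root"
```

That is roughly 10–11 GiB, or about 2.6 times the space used by the comparable ext4 install.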

Performance issues

It is hard to compare these two systems, for the one with “btrfs” is an older computer with a slower processor.  The system with “btrfs” seems to be slower on startup and shutdown, compared to opensuse 12.3 with “ext4” on the same computer.  But it is hard to judge.  Normally, one does not start up and shut down all that often, so even if “btrfs” makes that a bit slower, it isn’t something that would concern me.

I’m a bit more troubled by what I noticed today.  I booted up the system and, after login, its performance seemed very sluggish.  I ran “uptime” in a terminal, and that showed a load average around 3.0, instead of the 0.10 that I might have expected.  Checking with the “top” command, I could see that “snapperd” was busy, and that other “btrfs” utilities were running.

The slowdown continued for several minutes, after which it reverted to performing as it usually does.

Once again, I’m not sure that this is a cause for concern.  If I were using “btrfs” on my regular desktop, those utility commands would probably run overnight while I am asleep.  They ran after power on for the test system because I don’t keep that system running at all times, so there was a backlog of system maintenance tasks.

My future plans

At this stage, I am ambivalent on whether to use “btrfs”.

The first decision is easy.  When I go to 13.1RC1 on that test machine, I will revert to using “ext4”.  I don’t like seeing the root file system at 91% of capacity.

The harder decision will be on my regular desktop.  At present, the root file system has 40G of space allocated, and only 10G used.  So there is enough leeway for the additional overhead of “btrfs”.  And I am not too worried about the performance issue.  I don’t actually see much likelihood that I would make use of the ability to recover accidentally removed files.  But I suppose that if I decide to go with Tumbleweed, I could make a practice of taking a snapshot before a major Tumbleweed update, allowing the possibility of reversing that update by restoring to the snapshot.
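If I do go that route, snapper already supports that pattern with paired “pre” and “post” snapshots.  A sketch of how it might look (run as root, assuming the default openSUSE configuration named “root”; the descriptions are just examples):

```shell
# Take a "pre" snapshot before the update and remember its number.
pre=$(snapper -c root create --type pre --print-number \
      --description "before Tumbleweed update")

# ... run the update here, e.g. "zypper dup" ...

# Take the matching "post" snapshot so snapper can diff the two.
snapper -c root create --type post --pre-number "$pre" \
        --description "after Tumbleweed update"

# If the update goes badly, undo the changes made since the pre snapshot
# ("0" denotes the current state of the file system).
snapper -c root undochange "$pre"..0
</imports>
```

Note that “undochange” reverts files on the root file system; it is not a full boot-environment rollback.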

I still have a month to make up my mind.



About Neil Rickert

Retired mathematician and computer scientist who dabbles in cognitive science.

3 responses to “Trying the “btrfs” file system”

  1. leftycrupps says :

    I had never considered, but does snapshotting allow for, basically, a System Restore that GNU/Linux has never really had? That’d be too slick, it should be in the /boot/ partition as a GRUB utility 🙂

    Regarding your drive comparisons, tho: does the btrfs have snapshots that you don’t really need, and that’s why you’re close to 91% ? Perhaps there’s some cleanup to be done?


    • Neil Rickert says :

      I had never considered, but does snapshotting allow for, basically, a System Restore that GNU/Linux has never really had?

      I think so, though until I try it, I won’t know for sure.

      Regarding your drive comparisons, tho: does the btrfs have snapshots that you don’t really need, and that’s why you’re close to 91% ?

      Yes, that is the most likely explanation. When an update replaced 2647 packages, that possibly doubled the size of the used space, due to snapshots. There’s a “snapper” command that I can use to delete unwanted snapshots, but I haven’t tried it yet. When the space shortage becomes dire, I’ll try that.

      There is a recommendation somewhere in the documentation that a larger partition size should be used for btrfs, to allow for the space taken by snapshots.

