How does SSSD interact with tools like kinit?

Many SSSD users know that SSSD supports fail over from one server to another for authentication with services like su or ssh and even autodiscovers the Kerberos servers using DNS records.

But occasionally users would ask - OK, so SSSD lets me log in with another server but I also need to use kinit manually. Does kinit use the same server SSSD used? If so, how does kinit know which KDC SSSD uses?

The SSSD actually has a plugin that is able to tell what KDC or kadmin server to use for a particular realm. When SSSD discovers a Kerberos server, it puts the IP address of that server into a file stored under the /var/lib/sss/pubconf directory. The file that stores the KDC is called kdcinfo.$REALM and the kpasswd file is called kpasswd.$REALM. When SSSD switches to another Kerberos server during a fail over operation, the new IP address is written to these files. Also, if SSSD goes offline completely, these files are removed, so that tools using libkrb5 only rely on other means of configuration, such as the krb5.conf file.

As noted above, the kdcinfo files are only refreshed during SSSD operation, like user login. This poses a disadvantage for systems that don't perform many operations using the PAM stack, because the server that SSSD discovered might go offline without SSSD triggering a fail over operation. For these environments, it's better to disable the kdcinfo files altogether by setting the krb5_use_kdcinfo option to False and relying on krb5.conf completely. We plan on improving the kdcinfo plugin in future upstream versions so that it plays better with these kinds of setups.
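For such setups, the change amounts to a single option in the domain section of sssd.conf (the domain name below is a placeholder):

```
[domain/example.com]
krb5_use_kdcinfo = False
```

With the option set to False, libkrb5 consumers such as kinit rely purely on krb5.conf and DNS to locate the KDC.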

The SSSD kdcinfo plugin even has a man page!

Fake DNS replies in unit tests using resolv_wrapper

If your unit tests require custom DNS queries, there are some options you might take, such as adding records to the local /etc/hosts file. But that might not be possible for tests where you don't have root access (for instance, in build systems), and moreover you can't set any records other than A or AAAA. You can also run a full DNS server and point your resolv.conf file at it, but that normally requires root privileges too, and tampers with the usual setup of the test host. What would be ideal is a way to force the test into a mock DNS environment without affecting the live environment on the host system.

As Andreas Schneider pointed out earlier, it is time for another wrapper - so together with Andreas, we wrote resolv_wrapper! This post will show you how resolv_wrapper can help your testing.

Similar to the other wrappers, the resolv_wrapper provides a preloadable version of library calls. In this case it's res_init, res_query, res_search and res_close. These libresolv (or libc, depending on platform) library calls form the basis of DNS resolution routines like gethostbyname and can also be used to resolve less common DNS queries, such as SRV or SOA. In general, a unit test leveraging resolv_wrapper needs to set up its environment (more on that later), preload the library using LD_PRELOAD and that's it.

If your test environment has its own DNS server (as Samba or FreeIPA do), resolv_wrapper allows you to redirect DNS traffic to that server by pointing the test to a resolv.conf file that contains the IP address of your DNS server (the domain and address below are placeholders):
echo "search example.com" > /tmp/testresolv.conf
echo "nameserver 127.0.0.10" >> /tmp/testresolv.conf
RESOLV_WRAPPER_CONF=/tmp/testresolv.conf ./dns_unit_test

That would make your dns_unit_test perform all DNS queries through your DNS server, while your system would still be intact and using the original resolv.conf entries. In some other cases, you might want to test DNS resolution, but you may not want to set up a full DNS server just for the test. For this use-case, resolv_wrapper provides the ability to fake DNS replies using a hosts-like text file. Consider a unit test where you want to make sure that kinit can discover a Kerberos KDC with SRV records. Start by defining the hosts-like file (each line is "TYPE NAME DATA"; the names and address below are placeholders):
echo "SRV _kerberos._udp.example.com kdc.example.com 88" > /tmp/fakehosts
echo "A kdc.example.com 127.0.0.10" >> /tmp/fakehosts

Then export this hosts file using the RESOLV_WRAPPER_HOSTS environment variable and preload the resolv_wrapper as illustrated before:
RESOLV_WRAPPER_HOSTS=/tmp/fakehosts ./kinit_unit_test

If something goes wrong, resolv_wrapper allows the user to enable debugging by setting the RESOLV_WRAPPER_DEBUGLEVEL environment variable to a numerical value. The highest allowed value, 4, enables low-level tracing.

Let's show a complete example with a simple C program that tries to resolve an A record. We'll start with this C source file:
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

#include <netinet/in.h>
#include <arpa/nameser.h>
#include <arpa/inet.h>
#include <resolv.h>

int main(void)
{
        struct __res_state dnsstate;
        unsigned char answer[256];
        char addr[64] = { 0 };
        ns_msg handle;
        ns_rr rr;

        memset(&dnsstate, 0, sizeof(struct __res_state));
        res_ninit(&dnsstate);
        /* the hostname is a placeholder; use one from your fake hosts file */
        res_nquery(&dnsstate, "kdc.example.com", ns_c_in, ns_t_a,
                   answer, sizeof(answer));

        ns_initparse(answer, sizeof(answer), &handle);
        ns_parserr(&handle, ns_s_an, 0, &rr);
        inet_ntop(AF_INET, ns_rr_rdata(rr), addr, sizeof(addr));
        printf("address: %s\n", addr);

        return 0;
}
Please note I omitted all error checking to keep the code short.

Compile the file and link it with libresolv:
gcc rwrap_example.c -lresolv -o rwrap_example

And now you can just run the example binary along with resolv_wrapper, using the RESOLV_WRAPPER_DEBUGLEVEL to see the progress: RESOLV_WRAPPER_HOSTS=/tmp/fakehosts RESOLV_WRAPPER_DEBUGLEVEL=4 ./rwrap_example
RWRAP_TRACE(1970) - _rwrap_load_lib_function: Loaded __res_ninit from libc
RWRAP_TRACE(1970) - rwrap_res_nquery: Resolve the domain name [] - class=1, type=1
RWRAP_TRACE(1970) - rwrap_res_nquery:         nameserver:
RWRAP_TRACE(1970) - rwrap_res_nquery:         nameserver:
RWRAP_TRACE(1970) - rwrap_res_nquery:         nameserver:
RWRAP_TRACE(1970) - rwrap_res_fake_hosts: Searching in fake hosts file /tmp/fakehosts
RWRAP_TRACE(1970) - rwrap_res_fake_hosts: Successfully faked answer for []
RWRAP_TRACE(1970) - rwrap_res_nquery: The returned response length is: 0

And that's pretty much it!

The resolv_wrapper lives at the cwrap.org site along with the other wrappers and has its own dedicated page there. The git tree includes an RST-formatted documentation file with even more details and examples. We're also working on making resolv_wrapper usable on platforms other than Linux, although there are still some bugs here and there.

Add sudo rules to Active Directory and access them with SSSD

Centralizing sudo rules in an identity store such as FreeIPA is usually a good choice for your environment as opposed to copying the sudoers files around - the administrator has one place to edit the sudo rules and the rule set is always up to date. Replication mitigates most of the single-point-of-failure woes, and by using modern clients like the SSSD, the rules can also be cached on the client side, making the client resilient against network outages.

What if your identity store is Active Directory though? In this post, I'll show you how to load sudo rules to an AD server and how to configure SSSD to retrieve and cache the rules. A prerequisite is a running AD instance and a Linux client enrolled to the AD instance using tools like realmd or adcli. In this post, I'll use dc=DOMAINNAME,dc=LOCAL as the Windows domain name.

The first step is to load the sudo schema into the AD server. The schema describes the objects sudo uses and their attributes and is not part of standard AD installations. In Fedora, the file describing the schema is part of the SUDO RPM and is located at /usr/share/doc/sudo/schema.ActiveDirectory. You can copy the file to your AD server or download it from the Internet directly.

Next, launch the Windows command line and load the schema into AD's LDAP server using the ldifde utility:

ldifde -i -f schema.ActiveDirectory -c dc=X dc=DOMAINNAME,dc=LOCAL

Before creating the rule, let's also create an LDAP container that will store the rules. It's not a good idea to mix sudo rules into the same OU that already stores other objects, like users - a separate OU makes management easier and allows setting more fine-grained permissions. You can create the sudoers OU in "ADSI Edit" quite easily by right-clicking the top-level container (dc=DOMAINNAME,dc=LOCAL) and selecting "New->Object". In the dialog that opens, select "organizationalUnit", click "Next" and finally name the new OU "sudoers". If you select a different name or a different OU altogether, you'll have to set a custom ldap_sudo_search_base in sssd.conf; the default is "ou=sudoers,$BASE_DN".
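If you prefer the command line over "ADSI Edit", the same OU can be created from an LDIF file as well; this is a sketch using the example domain from this post:

```
dn: OU=sudoers,DC=DOMAINNAME,DC=LOCAL
objectClass: organizationalUnit
ou: sudoers
```

Save it as sudoers-ou.ldif and load it with ldifde -i -f sudoers-ou.ldif.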

Now, let's add the rule itself. For illustration purposes, we'll allow the user called 'jdoe' to execute less on all Linux clients in the enterprise.

In my test, I used "ADSI Edit" again. Just right-click the sudoers OU created above, select "New->Object" and then you should see sudoRole in the list of objectClasses. Create the rule based on the syntax described in the sudoers.ldap man page; as an example, I created a rule that allows the user called "jdoe" to run less, for instance to be able to inspect system log files.

dn: CN=lessrule,OU=sudoers,DC=DOMAINNAME,DC=LOCAL
objectClass: top
objectClass: sudoRole
cn: lessrule
distinguishedName: CN=lessrule,OU=sudoers,DC=DOMAINNAME,DC=LOCAL
name: lessrule
sudoHost: ALL
sudoCommand: /usr/bin/less
sudoUser: jdoe

The username of the user who is allowed to execute the rule is stored in the sudoUser attribute. Please note that the username must be stored non-qualified, which is different from the usual username@DOMAIN (or DOM\username) syntax used in Windows. For a more detailed description of how the sudo rules in LDAP work, refer to the sudoers.ldap manual page.

The client configuration involves minor modifications to two configuration files. First, edit /etc/nsswitch.conf and append 'sss' to the 'sudoers:' database configuration:

sudoers: files sss

If the sudoers database was not present in nsswitch.conf at all, just add the line as above. This modification allows sudo to communicate with the SSSD via the libsss_sudo library.
Finally, open the /etc/sssd/sssd.conf file and edit the [sssd] section to include the sudo service:

services = nss, pam, sudo
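Putting the client configuration together, the relevant parts of sssd.conf might look like the sketch below. The domain name is a placeholder, and ldap_sudo_search_base only needs to be set if you picked a non-default OU for the rules:

```
[sssd]
services = nss, pam, sudo
domains = DOMAINNAME.LOCAL

[domain/DOMAINNAME.LOCAL]
id_provider = ad
# Only needed when the rules live outside the default ou=sudoers,$BASE_DN
# ldap_sudo_search_base = ou=SudoRules,dc=DOMAINNAME,dc=LOCAL
```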

Then just restart sssd and the setup is done! For testing, log in as the user in question ("jdoe" here) and run:

sudo -l

You should be able to see something like this in the output:
User jdoe may run the following commands on adclient:
(lcl) /usr/bin/less

That's it! Now you can use your AD server as a centralized sudo rules store, and the rules are cached and available offline with the SSSD.

COPR repos with newer SSSD versions for RHEL-5 and RHEL-6

Two interesting COPR repos with SSSD packages were made available recently.

One was prepared by Stephen Gallagher and contains SSSD 1.9.x for RHEL-5. I created the other one with SSSD 1.11 built for RHEL-6. I'd love to see test reports for the RHEL-6 repo, as we are considering upgrading to 1.11 in RHEL-6.6.

For more details on the repos, see the announcements on sssd-devel about both Stephen's repo and mine.

Enrolling an Active Directory RHEL-6 client machine using adcli

If you're adding a modern Linux client to an Active Directory domain, you really should be using realmd. It's easy to use, secure and does the right thing by default.

If you haven't heard about realmd already, check out the documentation. In a nutshell, realmd makes the client enrollment as easy as:

# realm join

However, realmd depends on some software that is not available on stable platforms used in production, like RHEL-6 and its derivatives. Still, it's possible to use some of the components realmd builds on separately and have a reasonably user-friendly experience. In this blog post, I'll show you how, using a package called adcli, that is usually just a building block of realmd.

My test AD domain is called win.example.com and the server that runs the domain is called server.win.example.com. For the test, I've used a mostly default CentOS 6.5 VM.

Typically, you'll want to point your Linux machine to the AD server for DNS:

# cat /etc/resolv.conf
nameserver <IP address of server.win.example.com>

Start the setup by enabling the EPEL repository and installing the 'adcli' package:

# yum install adcli

You can type just 'adcli' to get an overview of what commands adcli supports. We're interested in joining the client to the AD domain in order to be able to log in as users from Active Directory.

Now you should be able to find your domain already:

# adcli info win.example.com
domain-name = WIN.EXAMPLE.COM
domain-short = WIN
domain-forest = WIN.EXAMPLE.COM
domain-controller = SERVER.WIN.EXAMPLE.COM
domain-controller-site = Default-First-Site-Name
domain-controller-flags = pdc gc ldap ds kdc timeserv closest writable good-timeserv full-secret ads-web
domain-controller-usable = yes
domain-controllers = SERVER.WIN.EXAMPLE.COM
computer-site = Default-First-Site-Name

As you can see, adcli was able to discover quite a few details about my test domain, so it's time to join the client:

# adcli join win.example.com
Password for Administrator@WIN.EXAMPLE.COM:

You'll be prompted for the Administrator password by default, but it's possible to specify another user with the -U option. See the adcli man page for the full list of options.

The join operation creates a keytab the machine will authenticate with. When you inspect the keytab with klist -kt, you should see several entries that contain your client hostname in some form. Here are the keytab contents on my test system:

# klist -k | head
Keytab name: FILE:/etc/krb5.keytab
KVNO Principal
---- --------------------------------------------------------------------------

It's recommended to also configure /etc/krb5.conf to use the AD domain:

[libdefaults]
 default_realm = WIN.EXAMPLE.COM
 dns_lookup_realm = true
 dns_lookup_kdc = true
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 WIN.EXAMPLE.COM = {
  kdc = server.win.example.com
  admin_server = server.win.example.com
 }


The final step is setting up the SSSD (or Winbind if you like) to actually make use of the keytab to resolve users. I'll show how to use the AD back end of SSSD as an example. Make sure sssd and authconfig are installed:

# yum install authconfig sssd

Unfortunately, authconfig in RHEL-6 doesn't support configuring the AD back end directly, so you'll have to do a bit of manual configuration. We can still use authconfig to set up the Name Service Switch and PAM stacks:

# authconfig --enablesssd --enablesssdauth --update

Now you should see 'sss' being present in /etc/nsswitch.conf and the pam stack configuration:

# grep sss /etc/nsswitch.conf
passwd: files sss
shadow: files sss
group: files sss
services: files sss
netgroup: files sss

The final step is to configure the SSSD itself. Open /etc/sssd/sssd.conf and define a single domain:

[sssd]
services = nss, pam, ssh, autofs
config_file_version = 2
domains = win.example.com

[domain/win.example.com]
id_provider = ad
# Uncomment if service discovery is not working
# ad_server = server.win.example.com

Start the SSSD and make sure it's up after reboots:

# service sssd start
# chkconfig sssd on

You should now be able to log in as an AD user just fine:

# su - administrator@win.example.com
-sh-4.1$ id
uid=388000500(administrator) gid=388000513(domain users) groups=388000513(domain users),388000512(domain admins),388000518(schema admins),388000519(enterprise admins),388000520(group policy creator owners),388000572(denied rodc password replication group),388001123(supergroup) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

For more information on the SSSD ad provider, see the SSSD wiki page or just the sssd-ad manual page.

Why is id so slow with SSSD?

Every once in a while, when debugging an SSSD performance problem for a user or a customer, I see that even experienced users tend to measure login performance by running id(1). That's not really the best thing to do, for one reason - running the plain id command does much more than what happens during login, and of course, at a cost.

Typically, the login program, such as ssh or gdm, needs to perform a couple of tasks aside from verifying the user's credentials. These include finding out if the user exists, what shell and home directory he has, and what groups the user is a member of, so that the user can access the files he should be allowed to (and vice versa). These tasks boil down to two corresponding glibc calls - getpwnam to find the details about the user, and getgrouplist to retrieve the list of groups he is a member of.

Because these two library functions are used so often, we take special care in the SSSD to make sure that we use any optimization that is available, such as the transitive memberOf attribute when IPA is used or the tokenGroups attribute when the AD provider is configured. To measure the performance of the Name Service Switch calls that the login program makes, you can call "id -G $username" from the command line.

Notice the extra "-G" switch. That really makes a bit of a difference, because the getgrouplist operation returns a list of numerical IDs the user is a member of. That's usually good enough for the login programs to set the groups for the user who logs in but not really friendly for the admin inspecting the output of the id command.

In contrast to "id -G $username", "id $username" does one operation that sounds trivial but can be extremely expensive - it resolves the group GIDs to group names. While that sounds like a really easy operation, it involves calling getgrgid for each of the GIDs returned by getgrouplist. And there comes the slowdown: the getgrgid operation is kind of an all-or-nothing call. It retrieves not only the information about the group itself, such as its name, but also all information about the members of the group, including all the users. This can get quite expensive - consider a university setup where each student is a member of a group "students" that consists of all students at the university.
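The difference is easy to observe from the shell. A minimal sketch (run as any user; substitute an SSSD-backed account to see the effect on a real directory):

```shell
user=$(id -un)

# getpwnam() + getgrouplist() only - what a login program needs;
# prints numeric GIDs, no getgrgid() calls
id -G "$user"

# Additionally resolves every GID to a group name via getgrgid(),
# which may pull in the full member list of each group server-side.
# Wrap both commands in time(1) to see the difference on a large directory.
id "$user"
```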

A legitimate question might be whether it's possible to restrict the amount of information the get-by-GID call retrieves. And the answer is both yes and no - the POSIX interface doesn't allow any such query directly, but the SSSD offers several means to speed up the overall processing. One quite recent addition is the ignore_group_members configuration option that was contributed by Paul Henson. Setting this option to True causes all groups to effectively appear empty, avoiding the need to download the members. Keep in mind that with many server implementations, the members might also include other nested groups, which causes the whole operation to recurse.
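As a sketch, enabling the option in a hypothetical domain section of sssd.conf looks like this:

```
[domain/example.com]
ignore_group_members = True
```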

The SSSD Active Directory provider, part 2

The previous post gave a high level overview of what new features are present in the AD provider starting with 1.9 and especially in the latest versions. This post will illustrate these AD specific features in more detail. Because the AD backend is still undergoing rapid development, most features are accompanied with the version it appeared in.

Faster logins

One of the most common complaints about using the LDAP provider with SSSD was that logins are too slow. Typically this was the case with very large and deeply nested group memberships, as the SSSD previously crawled the LDAP directory looking up the groups.

The AD provider is able to take advantage of a special attribute present in Active Directory called tokenGroups to read all the groups a user is a member of in a single call. This performance enhancement can reduce the number of LDAP calls needed to find the group memberships to a single one, drastically improving the login time.

In versions up to 1.11.3, the tokenGroups attribute is only leveraged if the SSSD maps the ID values from SIDs, not when POSIX attributes are used. With 1.11.3 or later, the tokenGroups attribute is leveraged even when POSIX attributes are used instead of automatic mapping.

Dynamic DNS updates

Clients enrolled to an Active Directory domain may be allowed to update their DNS records stored in AD dynamically. At the same time, Active Directory servers support DNS aging and scavenging, which means that stale DNS records might be removed from AD after a period of inactivity.

The AD provider supports both scenarios described above by default - it attempts to update the DNS record every time it goes online (typically after startup) and then periodically every 24 hours to keep the records from being scavenged.
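Both behaviours are controlled from the domain section of sssd.conf; the domain name below is a placeholder and the values shown match the defaults described above:

```
[domain/example.com]
# set to false to disable dynamic DNS updates entirely
dyndns_update = true
# refresh period in seconds; 86400 is the 24 hours mentioned above
dyndns_refresh_interval = 86400
```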

Dynamic NetBIOS name discovery

When referring to a user from an Active Directory domain, the domain is typically part of the identifier. This is in contrast to UNIX users, who are normally referred to with just the username:
id root

The AD users can be referred to either by fully qualifying the name with the AD domain:
id Administrator@ad.example.com

Or just by using the NetBIOS name of the AD domain:
id AD\\Administrator

In most deployments, the NetBIOS name is just the first part of the full domain name, but not always - the NetBIOS name can be customized on the AD side by the administrator. The AD provider of SSSD is able to recognize both formats and autodetect the right NetBIOS name as well.

DNS site discovery

Large Active Directory environments often span across multiple locations in multiple geographies. In Active Directory, these physical locations are represented as "sites". Each client belongs to a site based on the network subnet it resides in. For best performance, it is important that the clients are able to find the closest site and use the other domain controllers only as a fallback.

The support for discovering the closest site was added in SSSD 1.10 and is enabled by default.

Support for trusted domains in the same forest

Starting with version 1.10, the SSSD is able to dynamically find the trusted domains in the same forest and provide both authentication and identity information for users coming from the trusted domains. The SSSD retrieves identity information from the Global Catalog, so it's important that the users and all needed attributes are replicated to the Global Catalog. This includes even POSIX attributes such as home directory, login shell and most importantly UIDs and GIDs if not using ID mapping.

Prior to 1.10, it was somewhat possible to configure the SSSD to fetch identity data from trusted domains, but the administrator had to represent each domain with a separate [domain] stanza in the config file. Each domain stanza had to be fully configured as a separate identity source, including search bases and host names. Moreover, groups could only contain members from the same domain. In contrast, the native support requires no configuration at all; the trusted domains are discovered on the fly.

Support for enterprise logins

Some users in AD might use a different Kerberos principal suffix than the default one. Support for these enterprise logins was introduced in SSSD 1.10 and is on by default in the AD provider. This feature is also required to support logins of users from trusted domains.

A simplified LDAP access control mechanism

Starting with upstream version 1.11.2, there is a simplified way to express access control using an LDAP filter with the AD backend. The administrator now only needs to specify access_provider=ad and then supply the access filter with an option aptly called ad_access_filter.

The following example illustrates restricting access to users whose name begins with 'jo' and who have a valid home directory attribute:
access_provider = ad
ad_access_filter = (&(sAMAccountName=jo*)(unixHomeDirectory=*))

In addition to checking whether the user matches the filter, the AD access provider also checks the account validity. Expired accounts are not permitted even if they match the filter. Previously, the administrator had to configure the LDAP access filter and specify all the options manually.

The example used above would then look much more involved:
access_provider = ldap
ldap_access_order = filter, expire
ldap_account_expire_policy = ad
ldap_access_filter = (&(sAMAccountName=jo*)(unixHomeDirectory=*))
ldap_sasl_mech = GSSAPI
ldap_schema = ad

Still, for most deployments, the simple access provider is the best choice for the ease of configuration, especially when it comes to group membership.
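As a sketch, a simple-provider setup that only admits members of one (hypothetical) group looks like this:

```
access_provider = simple
simple_allow_groups = linuxadmins
```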

The SSSD Active Directory provider

This post intends to introduce a feature of SSSD that, despite being around since the 1.9 release, is still not used as often as it should be - the Active Directory backend.

Even though 1.9 was released more than a year ago, I still see many deployments configuring the pure LDAP backend when configuring an AD client machine. I'll try to explain the advantages of the AD backend compared to the LDAP backend, but in short, you should always use the AD backend when configuring SSSD with an AD server.

Below is a summary of the biggest advantages of the AD provider in my opinion. A more detailed description of the new features will follow in a next blog post.

The enrollment using realmd is easier and more secure

This is technically not a feature of the AD backend, but it's still worth noting. There are several ways to enroll a Linux client machine to AD - generate a keytab on Windows, use Samba, etc. All of them require some amount of knowledge and manual tweaking - refer to the SSSD wiki page for details. In contrast, realmd is a great tool written by a friendly upstream that reduces all that effort into a single line of shell command:

# realm join

You'll be prompted for AD Administrator credentials by default, but any user authorized to join a client to the realm can be used. Realmd also supports one-time passwords and more. After the join finishes, the client machine will run the SSSD with the AD provider configured by default, but winbind is also available, if you prefer that option. Realmd is included in the last couple of Fedora releases, starting with Fedora 18. If you are running an older distribution, such as RHEL-6, I'd advise using the underlying client tool called adcli, which is available from EPEL.

The configuration is simpler

Even if you opt for manual configuration or have a client already joined to a domain, the simplified configuration might be of interest. While AD can be treated just as an LDAP/Kerberos combo, several configuration options need to be tailored in order to match what is stored on the server side.

For example, user objects in AD are of objectclass user. In contrast, the LDAP provider defaults to posixAccount. Active Directory is also case insensitive, which requires the use of the "case_sensitive" option. The AD provider already comes with all the defaults set out of the box, so the previously complex configuration can be simplified to:

id_provider = ad

The example above assumes that UIDs and GIDs are mapped automatically by the SSSD and the AD servers are autodiscovered from DNS.
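Spelled out as a complete minimal sssd.conf, the AD provider configuration might look like the sketch below; the domain name is a placeholder:

```
[sssd]
services = nss, pam
config_file_version = 2
domains = example.com

[domain/example.com]
id_provider = ad
```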

AD specific features included

In comparison to using the generic LDAP and Kerberos providers, the AD provider allows the client machine to use several features unique to the AD backend. In particular:
  • logins are faster as the AD provider can leverage the special tokenGroups feature
  • the client machine is able to update or refresh its DNS records
  • the NetBIOS domain name can be autodiscovered and used in both lookups and output format (getent passwd AD\\Administrator now works)
  • clients are able to automatically discover the closest AD server to connect to using the 'sites' feature of AD
  • the AD provider automatically discovers trusted domains in the same forest, allowing all users from the same forest to log in to the machine
  • expressing access control with an LDAP filter was made much simpler with a new configuration option
  • custom UPN suffixes, also known as Enterprise Principals are supported by default
The next post will illustrate these AD specific features in more detail.

How to cache automounter maps using SSSD

In Fedora 17, we are introducing a new feature - the SSSD gains the ability to cache automounter maps and map entries stored in a remote database the SSSD can access, mostly LDAP in practice. Because there is no user-facing documentation available to describe how this feature works, I decided to introduce it in a little more detail in this blog post. The post is quite verbose as it explains the native LDAP case as well - if you are familiar with the automounter and how it is configured to access maps in LDAP, feel free to skip to the last section.

A brief introduction to automounter

The automounter is a very useful piece of software that lets the user access removable media or network shares without explicitly mounting them and unmounts them when they are not needed. The user simply accesses the directory where the remote file system is located and the automounter then takes care of mounting the correct share with the desired options. Obviously, the automounter needs to know which share should be mounted from which location. The configuration files for autofs are called maps and are similar to /etc/fstab at a high level. The main, top-level map is called the master map and is typically located in /etc/auto.master. There are quite a lot of resources on the automounter on the Internet; the reader may continue to the Red Hat documentation, for example.

A hands-on example: automounter configured with flat files

To illustrate the capabilities of the automounter, here is a simple example. Consider there's an NFS file server on the network (called nfs.example.com below as a placeholder) that exports a share called pub. We'd like to configure the system to automatically mount this share at /shares/pub. First, we'll include an entry for the /shares directory in the master map. Edit the file /etc/auto.master to include a nested map for pub:

/shares/ /etc/auto.shares

Then define the mount point pub in the file referenced from the master map, called /etc/auto.shares:

pub -fstype=nfs nfs.example.com:/pub

The three fields are pretty much self-explanatory. The first is the name of the mount point, the second contains the mount options, and the last is the filesystem that should be mounted, an NFS server:/share in our case (the server name is a placeholder).

We're nearly done - the last step is to start the automounter daemon and create the top-level directory:

mkdir /shares
service autofs start

Entering the directory /shares/pub should now automatically mount the remote filesystem.

Storing automount maps in LDAP

When the maps are stored in files, the administrator faces a problem on a large network - he needs to distribute the files to all the hosts that he manages on the network. One possible solution is to use a tool such as puppet or cfengine to help distribute the files. Another solution is to get rid of the files altogether and fetch the maps from a centralized directory, LDAP in particular. The automounter then only needs to know the location to download the maps from.

A hands-on example: automounter configured to access maps in LDAP

We'll define the same share mounted at the same location as we did in the previous example, just not using the automounter native syntax but rather LDIF, which can be loaded into LDAP. I assume the reader is familiar with LDAP and with utilities to manage an LDAP directory. In the following examples, we will be considering an LDAP server running at ldap.example.com (a placeholder name) with a base DN of dc=example,dc=com, using the RFC2307bis schema. Before proceeding with the example, remove the /shares entry from /etc/auto.master so that the automounter does not load the files-based map for this particular mount point anymore. The first step is creating the container all the maps will reside in:

dn: cn=automount,dc=example,dc=com
objectClass: nsContainer
objectClass: top
cn: automount

We can now load the auto.master map into the container:

dn: automountMapName=auto.master,cn=automount,dc=example,dc=com
objectClass: automountMap
objectClass: top
automountMapName: auto.master

The auto.master map is going to be linked to the auto.shares map, as was the case with the files-based maps. The map itself is an object of object class automountMap; the "link" that contains the information about the mount or share itself is of object class automount. The auto.shares map looks quite similar to the auto.master map:

dn: automountMapName=auto.shares,cn=automount,dc=example,dc=com
objectClass: automountMap
objectClass: top
automountMapName: auto.shares

Here is an example of the automount object that establishes the link between the auto.shares map and the /shares mount point:

dn: automountKey=/shares,automountMapName=auto.master,cn=automount,dc=example,dc=com
objectClass: automount
objectClass: top
automountKey: /shares
automountInformation: auto.shares

The last and final object is the mount point pub itself. This automount object includes the NFS share information in the automountInformation attribute - substitute your own NFS server and export path for nfs.example.com:/export/pub below:

dn: automountKey=pub,automountMapName=auto.shares,cn=automount,dc=example,dc=com
objectClass: automount
objectClass: top
automountKey: pub
description: pub
automountInformation: -fstype=nfs,rw nfs.example.com:/export/pub

Load the database into the server with a tool such as ldapadd and the server-side setup is ready. The second part of our setup is configuring the automounter client daemon. This example illustrates the native LDAP support; the following section shows the new SSSD integration. The client configuration touches two files. One is /etc/sysconfig/autofs, which contains all the options passed to the autofs daemon. In order to fetch data from the LDAP server, we need to specify the LDAP server URI, the search base and the schema used:

# The schema attributes are present in the config file, just uncomment them
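For reference, here is a sketch of what the uncommented section of /etc/sysconfig/autofs could look like for this example. The LDAP URI is an assumption - replace ldap.example.com with your actual server; the option names are the ones shipped commented out in the default Fedora config file:

LDAP_URI="ldap://ldap.example.com"
SEARCH_BASE="cn=automount,dc=example,dc=com"
MAP_OBJECT_CLASS="automountMap"
ENTRY_OBJECT_CLASS="automount"
MAP_ATTRIBUTE="automountMapName"
ENTRY_ATTRIBUTE="automountKey"
VALUE_ATTRIBUTE="automountInformation"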

The second file that needs amending is /etc/nsswitch.conf. This file is supposed to contain the Name Service Switch map sources, but it is frequently used by third-party programs such as sudo or the automounter to specify their data sources as well. The automounter parses only a single line from the file - the one that starts with automount:. The default value should be files; changing it to files ldap tells the automounter to try mounting from the files-based maps first and then from the LDAP-based maps. Change the line to read files ldap and restart the automounter. Accessing /shares/pub should now mount the share from the central location just like it did with flat files.
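The resulting line in /etc/nsswitch.conf:

automount: files ldap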

Accessing the LDAP maps using the SSSD

Storing the automounter maps in LDAP has the advantage of centralizing the data in one place, but it also brings the disadvantage of relying on network connectivity for downloading the maps. If there is a network outage or the LDAP server goes down, the clients will not be able to mount the shares. The SSSD is able to cache the automount maps in its persistent on-disk cache, allowing operation even when the LDAP server is unreachable. Apart from the offline access, the other benefits of using the SSSD for automounter lookups are:

  • unified configuration of LDAP parameters such as the servers used, timeout options and security properties in one place (sssd.conf)

  • autofs takes advantage of the advanced features SSSD offers, such as server fail over, server discovery using DNS SRV lookups and more

  • only one connection to the LDAP server open at a time, resulting in less load on the LDAP server and better performance

  • caching of the data - again, less load on the LDAP server and better performance on the client side, as the client doesn't have to go to the server with each request

Please note that the caching only applies to the mount information, not the mounted data itself!

A hands-on example: migrating the client configuration to use the SSSD

The logic to access data sources in the automounter is implemented in lookup modules. The sss lookup module, which is part of autofs itself, communicates with a small client library included in the libsss_autofs package. The last example shows how to configure the client to perform mounts using the SSSD lookup module. You will need the SSSD client package libsss_autofs installed along with an autofs version that contains the sss lookup module - in Fedora 17, that means autofs-5.0.6-11 or newer.

If your environment is already using the SSSD to perform user and group lookups, there are only a couple of changes that need to be done:
  1. Create a new [autofs] section in the /etc/sssd/sssd.conf config file. Leaving it empty uses the defaults, which should be good enough for most deployments. The sssd.conf manual page describes the single currently available option.
  2. The services parameter in the [sssd] section must be amended to include the autofs service as well:
    services = nss,pam,autofs
  3. Another parameter in the sssd.conf file that might optionally need amending is the search base. The default is to use the ldap_search_base parameter, but you can also specify a separate search base for the automounter maps using the ldap_autofs_search_base option:
    ldap_autofs_search_base = cn=automount,dc=example,dc=com
  4. The last step of the client configuration is to tell autofs to contact the SSSD for automounter maps. This is done by changing the automount: line in the /etc/nsswitch.conf config file to say sss instead of ldap:
    automount: files sss
  5. Restart the SSSD:
    service sssd restart
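Taken together, the autofs-related parts of sssd.conf from the steps above might look like this sketch. The domain name example.com is an assumption - your existing domain section is probably named differently and already contains your LDAP connection parameters:

[sssd]
services = nss,pam,autofs
domains = example.com

[autofs]

[domain/example.com]
ldap_autofs_search_base = cn=automount,dc=example,dc=com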
The schema, and in turn the attribute names used for the LDAP searches by the SSSD, are set to default values depending on the schema the SSSD uses. The prevalent schema for storing the autofs maps is RFC2307bis, while the default LDAP schema of the SSSD is RFC2307. The LDAP provider of the SSSD has a couple of options that can be used to override the attribute names. The following example illustrates using the RFC2307bis-style attribute names with the RFC2307 schema; the option names are documented in the sssd-ldap manual page:

ldap_autofs_map_object_class = automountMap
ldap_autofs_entry_object_class = automount
ldap_autofs_map_name = automountMapName
ldap_autofs_entry_key = automountKey
ldap_autofs_entry_value = automountInformation

The autofs-SSSD integration is quite a new feature, so there inevitably are bugs. In particular, be sure to run a package that fixes the bad key length calculation. If you think you have found another bug, please find us on the #sssd channel on Freenode; we'll be glad to help you debug your problem or relay the question to Ian Kent, who wrote the sss automounter module inside autofs and maintains the autofs package in Fedora and RHEL.