The Debian Administrator's Handbook



GOING FURTHER umask
When an application creates a file, it assigns indicative permissions, knowing that the system automatically removes certain rights, given by the command umask. Enter umask in a shell; you will see a mask such as 0022. This is simply an octal representation of the rights to be systematically removed (in this case, the write right for the group and other users).
If you give it a new octal value, the umask command modifies the mask. Used in a shell initialization file (for example, ~/.bash_profile), it will effectively change the default mask for your work sessions.
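The effect of the mask is easy to observe by creating files under two different umask values (the file names here are arbitrary):

```shell
# With the default mask 022: group and others lose the write right
umask 022
touch demo-default        # created as 666 & ~022 = 644 (rw-r--r--)

# With a stricter mask 027: group loses write, others lose everything
umask 027
touch demo-strict         # created as 666 & ~027 = 640 (rw-r-----)

stat -c '%a %n' demo-default demo-strict
```

Regular files start from mode 0666, so the mask can only remove read/write bits; newly created directories start from 0777 instead.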

9.4. Administration Interfaces
Using a graphical interface for administration is interesting in various circumstances. An administrator does not necessarily know all the configuration details for all their services, and doesn't always have the time to go seeking out the documentation on the matter. A graphical interface for administration can thus accelerate the deployment of a new service. It can also simplify the setup of services which are hard to configure.
Such an interface is only an aid, and not an end in itself. In all cases, the administrator must master its behavior in order to understand and work around any potential problem.
Since no interface is perfect, you may be tempted to try several solutions. This is to be avoided as much as possible, since different tools are sometimes incompatible in their work methods.
Even if they all aim to be very flexible and try to adopt the configuration file as a single reference, they are not always able to integrate external changes.
9.4.1. Administrating on a Web Interface: webmin
This is, without a doubt, one of the most successful administration interfaces. It is a modular system managed through a web browser, covering a wide array of areas and tools. Furthermore, it is internationalized and available in many languages.
Sadly, webmin is no longer part of Debian. Its Debian maintainer — Jaldhar H. Vyas — removed the packages he created because he no longer had the time required to maintain them at an acceptable quality level. Nobody has officially taken over, so Jessie does not have the webmin package.
There is, however, an unofficial package distributed on the webmin.com website. Contrary to the original Debian packages, this package is monolithic; all of its configuration modules are installed and activated by default, even if the corresponding service is not installed on the machine.
SECURITY Changing the root password
On the first login, identification is conducted with the root username and its usual password. It is recommended to change the password used for webmin as soon as possible, so that if it is compromised, the root password for the server will not be involved, even if this confers important administrative rights to the machine.
Beware! Since webmin has so many features, a malicious user accessing it could compromise the security of the entire system. In general, interfaces of this kind are not recommended for important systems with strong security constraints (firewall, sensitive servers, etc.).
Webmin is used through a web interface, but it does not require Apache to be installed.
Essentially, this software has its own integrated mini web server. This server listens by default on port 10000 and accepts secure HTTP connections.

Included modules cover a wide variety of services, among which:
all base services: creation of users and groups, management of crontab files, init scripts, viewing of logs, etc.
bind: DNS server configuration (name service);
postfix: SMTP server configuration (e-mail);
inetd: configuration of the inetd super-server;
quota: user quota management;
dhcpd: DHCP server configuration;
proftpd: FTP server configuration;
samba: Samba file server configuration;
software: installation or removal of software from Debian packages and system updates.
The administration interface is available in a web browser at https://localhost:10000.
Beware! Not all the modules are directly usable. Sometimes they must be configured by specifying the locations of the corresponding configuration files and some executable files (programs). Frequently the system will politely prompt you when it fails to activate a requested module.
ALTERNATIVE GNOME control center
The GNOME project also provides multiple administration interfaces that are usually accessible via the “Settings” entry in the user menu on the top right. gnome-control-center is the main program that brings them all together, but many of the system wide configuration tools are effectively provided by other packages (accountsservice, system-config-printer, etc.). Although they are easy to use, these applications cover only a limited number of base services: user management, time configuration, network configuration, printer configuration, and so on.
9.4.2. Configuring Packages: debconf
Many packages are automatically configured after asking a few questions during installation through the Debconf tool. These packages can be reconfigured by running dpkg-reconfigure package.
For most cases, these settings are very simple; only a few important variables in the configuration file are changed. These variables are often grouped between two “demarcation” lines so that reconfiguration of the package only impacts the enclosed area. In other cases, reconfiguration will not change anything if the script detects a manual modification of the configuration file, in order to preserve these human interventions (because the script can't ensure that its own modifications will not disrupt the existing settings).
DEBIAN POLICY Preserving changes
The Debian Policy expressly stipulates that everything should be done to preserve manual changes made to a configuration file, so more and more scripts take precautions when editing configuration files. The general principle is simple: the script will only make changes if it knows the status of the configuration file, which is verified by comparing the checksum of the file against that of the last automatically generated file. If they are the same, the script is authorized to change the configuration file. Otherwise, it determines that the file has been changed and asks what action it should take (install the new file, save the old file, or try to integrate the new changes with the existing file). This precautionary principle has long been unique to Debian, but other distributions have gradually begun to embrace it.
The ucf program (from the Debian package of the same name) can be used to implement such a behavior.
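The checksum comparison at the heart of this principle can be sketched in a few lines of shell (file names hypothetical; real maintainer scripts use ucf or debconf rather than this ad-hoc logic):

```shell
# First installation: write the config and remember its checksum
echo "setting=1" > demo.conf
md5sum demo.conf > demo.conf.md5sum

# On upgrade: only overwrite if the administrator has not touched the file
if md5sum --check --status demo.conf.md5sum; then
    echo "setting=2" > demo.conf          # file unmodified: safe to update
    md5sum demo.conf > demo.conf.md5sum   # record the new reference checksum
else
    echo "demo.conf was modified manually; leaving it alone"
fi
```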

9.5. syslog System Events
9.5.1. Principle and Mechanism
The rsyslogd daemon is responsible for collecting service messages coming from applications and the kernel, then dispatching them into log files (usually stored in the /var/log/ directory). It obeys the /etc/rsyslog.conf configuration file.
Each log message is associated with an application subsystem (called “facility” in the documentation):
auth and authpriv: for authentication;
cron: comes from task scheduling services, cron and atd;
daemon: affects a daemon without any special classification (DNS, NTP, etc.);
ftp: concerns the FTP server;
kern: message coming from the kernel;
lpr: comes from the printing subsystem;
mail: comes from the e-mail subsystem;
news: Usenet subsystem message (especially from an NNTP — Network News Transfer Protocol — server that manages newsgroups);
syslog: messages from the syslogd server, itself;
user: user messages (generic);
uucp: messages from the UUCP server (Unix to Unix Copy Program, an old protocol notably used to distribute e-mail messages);
local0 to local7: reserved for local use.
Each message is also associated with a priority level. Here is the list in decreasing order:
emerg: “Help!” There is an emergency, the system is probably unusable;
alert: hurry up, any delay can be dangerous, action must be taken immediately;
crit: conditions are critical;
err: error;
warn: warning (potential error);
notice: conditions are normal, but the message is important;
info: informative message;
debug: debugging message.
9.5.2. The Configuration File
The syntax of the /etc/rsyslog.conf file is detailed in the rsyslog.conf(5) manual page, but there is also HTML documentation available in the rsyslog-doc package (/usr/share/doc/rsyslog-doc/html/index.html). The overall principle is to write “selector” and “action” pairs. The selector defines all relevant messages, and the action describes how to deal with them.
9.5.2.1. Syntax of the Selector
The selector is a semicolon-separated list of subsystem.priority pairs (example: auth.notice;mail.info). An asterisk may represent all subsystems or all priorities (examples: *.alert or mail.*). Several subsystems can be grouped, by separating them with a comma (example: auth,mail.info). The priority indicated also covers messages of equal or higher priority; thus auth.alert indicates the auth subsystem messages of alert or emerg priority. Prefixed with an exclamation point (!), it indicates the opposite, in other words the strictly lower priorities; auth.!notice, thus, indicates messages issued from auth, with info or debug priority. Prefixed with an equal sign (=), it corresponds to precisely and only the priority indicated (auth.=notice only concerns messages from auth with notice priority).
Each element in the list on the selector overrides previous elements. It is thus possible to restrict a set or to exclude certain elements from it. For example, kern.info;kern.!err means messages from the kernel with priority between info and warn. The none priority indicates the empty set (no priorities), and may serve to exclude a subsystem from a set of messages. Thus, *.crit;kern.none indicates all the messages of priority equal to or higher than crit not coming from the kernel.
9.5.2.2. Syntax of Actions
BACK TO BASICS The named pipe, a persistent pipe
A named pipe is a particular type of file that operates like a traditional pipe (the pipe that you make with the “|” symbol on the command line), but via a file. This mechanism has the advantage of being able to relate two unrelated processes. Anything written to a named pipe blocks the process that writes until another process attempts to read the data written. This second process reads the data written by the first, which can then resume execution.
Such a file is created with the mkfifo command.
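The behaviour can be tried directly in a shell (paths arbitrary):

```shell
mkfifo /tmp/demo-fifo             # create the named pipe

# Start a reader in the background; it blocks until data arrives
cat /tmp/demo-fifo > /tmp/demo-out &

# The writer unblocks the reader; without a reader, this echo would block
echo "message through the pipe" > /tmp/demo-fifo
wait                              # let the background reader finish

cat /tmp/demo-out
```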
The various possible actions are:
add the message to a file (example: /var/log/messages);
send the message to a remote syslog server (example: @log.falcot.com);
send the message to an existing named pipe (example: |/dev/xconsole);
send the message to one or more users, if they are logged in (example: root,rhertzog);
send the message to all logged in users (example: *);
write the message in a text console (example: /dev/tty8).
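Combining the two syntaxes, a few selector and action pairs in /etc/rsyslog.conf could look like this (the file names are illustrative; log.falcot.com is taken from the examples above):

```
auth,authpriv.*         /var/log/auth.log
kern.info;kern.!err     /var/log/kern-info.log
*.crit;kern.none        @log.falcot.com
daemon.*                |/dev/xconsole
*.emerg                 *
```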
SECURITY Forwarding logs
It is a good idea to record the most important logs on a separate machine (perhaps dedicated for this purpose), since this will prevent any possible intruder from removing traces of their intrusion (unless, of course, they also compromise this other server). Furthermore, in the event of a major problem (such as a kernel crash), you have the logs available on another machine, which increases your chances of determining the sequence of events that caused the crash.
To accept log messages sent by other machines, you must reconfigure rsyslog: in practice, it is sufficient to activate the ready-for-use entries in /etc/rsyslog.conf ($ModLoad imudp and $UDPServerRun 514).

9.6. The inetd Super-Server
Inetd (often called “Internet super-server”) is a server of servers. It executes rarely used servers on demand, so that they do not have to run continuously.
The /etc/inetd.conf file lists these servers and their usual ports. The inetd command listens to all of them; when it detects a connection to any such port, it executes the corresponding server program.
DEBIAN POLICY Register a server in inetd.conf
Packages frequently want to register a new server in the /etc/inetd.conf file, but Debian Policy prohibits any package from modifying a configuration file that it doesn't own. This is why the update-inetd script (in the package with the same name) was created: it manages the configuration file, and other packages can thus use it to register a new server to the super-server's configuration.
Each significant line of the /etc/inetd.conf file describes a server through seven fields (separated by spaces):
The TCP or UDP port number, or the service name (which is mapped to a standard port number with the information contained in the /etc/services file).
The socket type: stream for a TCP connection, dgram for UDP datagrams.
The protocol: tcp or udp.
The options: two possible values: wait or nowait, to tell inetd whether it should wait or not for the end of the launched process before accepting another connection. For TCP connections, easily multiplexable, you can usually use nowait. For programs responding over UDP, you should use nowait only if the server is capable of managing several connections in parallel. You can suffix this field with a period, followed by the maximum number of connections authorized per minute (the default limit is 256).
The user name of the user under whose identity the server will run.
The full path to the server program to execute.
The arguments: this is a complete list of the program's arguments, including its own name (argv[0] in C).
The following example illustrates the most common cases:
Example 9.1. Excerpt from /etc/inetd.conf
talk   dgram  udp wait   nobody.tty /usr/sbin/in.talkd in.talkd
finger stream tcp nowait nobody     /usr/sbin/tcpd     in.fingerd
ident  stream tcp nowait nobody     /usr/sbin/identd   identd -i
The tcpd program is frequently used in the /etc/inetd.conf file. It allows limiting incoming connections by applying access control rules, documented in the hosts_access(5) manual page, and which are configured in the /etc/hosts.allow and /etc/hosts.deny files. Once it has been determined that the connection is authorized, tcpd executes the real server (like in.fingerd in our example). It is worth noting that tcpd relies on the name under which it was invoked (that is the first argument, argv[0]) to identify the real program to run. So you should not start the arguments list with tcpd but with the program that must be wrapped.
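The way a program can see the name it was invoked under (the mechanism tcpd relies on) can be demonstrated with a small script and a second name for it; paths and names here are arbitrary:

```shell
# A script that reports the name under which it was called
# ($0 in shell, argv[0] in C)
cat > /tmp/real-server <<'EOF'
#!/bin/sh
echo "invoked as: $0"
EOF
chmod +x /tmp/real-server

# Give it a second name, then call it through that name
ln -s /tmp/real-server /tmp/in.demo
/tmp/in.demo
```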
COMMUNITY Wietse Venema
Wietse Venema, whose expertise in security has made him a renowned programmer, is the author of the tcpd program. He is also the main creator of Postfix, the modular e-mail server (SMTP, Simple Mail Transfer Protocol), designed to be safer and more reliable than sendmail, which features a long history of security vulnerabilities.
ALTERNATIVE Other inetd commands
While Debian installs openbsd-inetd by default, there is no lack of alternatives: we can mention inetutils-inetd, micro-inetd,
rlinetd and xinetd.
This last incarnation of a super-server offers very interesting possibilities. Most notably, its configuration can be split into several files (stored, of course, in the /etc/xinetd.d/ directory), which can make an administrator's life easier.
Last but not least, it is even possible to emulate inetd's behaviour with systemd's socket-activation mechanism (see Section 9.1.1, “The systemd init system”).
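As an illustration of that socket-activation mechanism, a hypothetical pair of systemd units for an inetd-style service could look as follows (the unit names, port, and binary path are invented for the example):

```
# /etc/systemd/system/demo.socket
[Socket]
ListenStream=7979
Accept=yes

[Install]
WantedBy=sockets.target

# /etc/systemd/system/demo@.service: one instance per connection,
# talking on its standard input/output just like an inetd-launched server
[Service]
ExecStart=/usr/local/sbin/demo-server
StandardInput=socket
```

With Accept=yes, systemd spawns one instance of the template service per incoming connection, which mirrors inetd's nowait behaviour.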

9.7. Scheduling Tasks with cron and atd
cron is the daemon responsible for executing scheduled and recurring commands (every day, every week, etc.); atd is that which deals with commands to be executed a single time, but at a specific moment in the future.
In a Unix system, many tasks are scheduled for regular execution:
rotating the logs;
updating the database for the locate program;
back-ups;
maintenance scripts (such as cleaning out temporary files).
By default, all users can schedule the execution of tasks. Each user thus has their own crontab in which they can record scheduled commands. It can be edited by running crontab -e (its content is stored in the /var/spool/cron/crontabs/user file).
SECURITY Restricting cron or atd
You can restrict access to cron by creating an explicit authorization file (whitelist) in /etc/cron.allow, in which you indicate the only users authorized to schedule commands. All others will automatically be deprived of this feature. Conversely, to only block one or two troublemakers, you could write their username in the explicit prohibition file (blacklist), /etc/cron.deny. This same feature is available for atd, with the /etc/at.allow and /etc/at.deny files.
The root user has their own crontab, but can also use the /etc/crontab file, or write additional crontab files in the /etc/cron.d directory. These last two solutions have the advantage of being able to specify the user identity to use when executing the command.
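A fragment in /etc/cron.d/ therefore carries the extra user field; for instance (file name, user, and script path invented for the example):

```
# /etc/cron.d/demo-maintenance
# min hour dom mon dow user   command
  15  3    *   *   *  backup  /usr/local/bin/nightly-cleanup
```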
The cron package includes by default some scheduled commands that execute:
programs in the /etc/cron.hourly/ directory once per hour;
programs in /etc/cron.daily/ once per day;
programs in /etc/cron.weekly/ once per week;
programs in /etc/cron.monthly/ once per month.
Many Debian packages rely on this service: by putting maintenance scripts in these directories,
they ensure optimal operation of their services.
9.7.1. Format of a crontab File
TIP Text shortcuts for cron
cron recognizes some abbreviations which replace the first five fields in a crontab entry. They correspond to the most classic scheduling options:

@yearly: once per year (January 1, at 00:00);
@monthly: once per month (the 1st of the month, at 00:00);
@weekly: once per week (Sunday at 00:00);
@daily: once per day (at 00:00);
@hourly: once per hour (at the beginning of each hour).
SPECIAL CASE cron and daylight savings time
In Debian, cron takes the time change (for Daylight Savings Time, or in fact for any significant change in the local time) into account as best as it can. Thus, the commands that should have been executed during an hour that never existed (for example,
tasks scheduled at 2:30 am during the Spring time change in France, since at 2:00 am the clock jumps directly to 3:00 am) are executed shortly after the time change (thus around 3:00 am DST). On the other hand, in autumn, when commands would be executed several times (2:30 am DST, then an hour later at 2:30 am standard time, since at 3:00 am DST the clock turns back to 2:00 am) are only executed once.
Be careful, however: if the order of the different scheduled tasks or the delay between their respective executions matters, you should check the compatibility of these constraints with cron's behavior; if necessary, you can prepare a special schedule for the two problematic nights per year.
Each significant line of a crontab describes a scheduled command with the six (or seven) following fields:
the value for the minute (number from 0 to 59);
the value for the hour (from 0 to 23);
the value for the day of the month (from 1 to 31);
the value for the month (from 1 to 12);
the value for the day of the week (from 0 to 7, 1 corresponding to Monday, Sunday being represented by both 0 and 7; it is also possible to use the first three letters of the name of the day of the week in English, such as Sun, Mon, etc.);
the user name under whose identity the command must be executed (in the /etc/crontab file and in the fragments located in /etc/cron.d/, but not in the users' own crontab files);
the command to execute (when the conditions defined by the first five columns are met).
All these details are documented in the crontab(5) man page.
Each value can be expressed in the form of a list of possible values (separated by commas). The syntax a-b describes the interval of all the values between a and b. The syntax a-b/c describes the interval with an increment of c (example: 0-10/2 means 0,2,4,6,8,10). An asterisk * is a wildcard, representing all possible values.
Example 9.2. Sample crontab file
#Format
#min hour day mon dow command
# Download data every night at 7:25 pm
25 19 * * * $HOME/bin/get.pl
# 8:00 am, on weekdays (Monday through Friday)
00 08 * * 1-5 $HOME/bin/dosomething

# Restart the IRC proxy after each reboot
@reboot /usr/bin/dircproxy
TIP Executing a command on boot
To execute a command a single time, just after booting the computer, you can use the @reboot macro (a simple restart of cron does not trigger a command scheduled with @reboot). This macro replaces the first five fields of an entry in the crontab.
ALTERNATIVE Emulating cron with systemd
It is possible to emulate part of cron's behaviour with systemd's timer mechanism (see Section 9.1.1, “The systemd init system”).
9.7.2. Using the at Command
The at command executes a command at a specified moment in the future. It takes the desired time and date as command-line parameters, and the command to be executed in its standard input. The command will be executed as if it had been entered in the current shell. at even takes care to retain the current environment, in order to reproduce the same conditions when it executes the command. The time is indicated by following the usual conventions: 16:12 or 4:12pm represents 4:12 pm. The date can be specified in several European and Western formats, including DD.MM.YY (27.07.15 thus representing 27 July 2015), YYYY-MM-DD (this same date being expressed as 2015-07-27), MM/DD/[CC]YY (i.e., 12/25/15 or 12/25/2015 will be December 25, 2015), or simply MMDD[CC]YY (so that 122515 or 12252015 will, likewise, represent December 25, 2015). Without a date, the command will be executed as soon as the clock reaches the time indicated (the same day, or tomorrow if that time has already passed on the same day). You can also simply write “today” or “tomorrow”, which is self-explanatory.
$ at 09:00 27.07.15 <<END
> echo "Don't forget to wish a Happy Birthday to Raphaël!" \
>   | mail lolando@debian.org
> END
warning: commands will be executed using /bin/sh
job 31 at Mon Jul 27 09:00:00 2015
An alternative syntax postpones the execution for a given duration: at now + number period. The period can be minutes, hours, days, or weeks. The number simply indicates the number of said units that must elapse before execution of the command.
To cancel a task scheduled by cron, simply run crontab -e and delete the corresponding line in the crontab file. For at tasks, it is almost as easy: run atrm task-number. The task number is indicated by the at command when you scheduled it, but you can find it again with the atq command, which gives the current list of scheduled tasks.

9.8. Scheduling Asynchronous Tasks: anacron
anacron is the daemon that complements cron for computers that are not on at all times. Since regular tasks are usually scheduled for the middle of the night, they will never be executed if the computer is off at that time. The purpose of anacron is to execute them, taking into account periods in which the computer is not working.
Please note that anacron will frequently execute such activity a few minutes after booting the machine, which can render the computer less responsive. This is why the tasks in the /etc/anacrontab file are started with the nice command, which reduces their execution priority and thus limits their impact on the rest of the system. Beware, the format of this file is not the same as that of /etc/crontab; if you have particular needs for anacron, see the anacrontab(5) manual page.
BACK TO BASICS Priorities and nice
Unix systems (and thus Linux) are multi-tasking and multi-user systems. Indeed, several processes can run in parallel, and be owned by different users: the kernel mediates access to the resources between the different processes. As a part of this task, it has a concept of priority, which allows it to favor certain processes over others, as needed. When you know that a process can run in low priority, you can indicate so by running it with nice program. The program will then have a smaller share of the CPU, and will have a smaller impact on other running processes. Of course, if no other process needs to run, the program will not be artificially held back.
nice works with levels of “niceness”: the positive levels (from 1 to 19) progressively lower the priority, while the negative levels (from -1 to -20) will increase it — but only root can use these negative levels. Unless otherwise indicated (see the nice(1) manual page), nice increases the current level by 10.
If you discover that an already running task should have been started with nice it is not too late to fix it; the renice command changes the priority of an already running process, in either direction (but reducing the “niceness” of a process is reserved for the root user).
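Run without arguments, nice prints the current niceness, which makes the default increment easy to observe (the values shown assume the usual starting niceness of 0):

```shell
nice                 # prints the shell's current niceness (usually 0)
nice nice            # runs `nice` under nice: niceness raised by 10
nice -n 19 nice      # explicit increment: niceness raised to the minimum priority
```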
Installation of the anacron package deactivates execution by cron of the scripts in the /etc/cron.hourly/, /etc/cron.daily/, /etc/cron.weekly/, and /etc/cron.monthly/ directories. This avoids their double execution by anacron and cron. The cron command remains active and will continue to handle the other scheduled tasks (especially those scheduled by users).

9.9. Quotas
The quota system allows limiting disk space allocated to a user or group of users. To set it up, you must have a kernel that supports it (compiled with the CONFIG_QUOTA option) — as is the case with Debian kernels. The quota management software is found in the quota Debian package.
To activate quota in a filesystem, you have to indicate the usrquota and grpquota options in /etc/fstab for the user and group quotas, respectively. Rebooting the computer will then update the quotas in the absence of disk activity (a necessary condition for proper accounting of already used disk space).
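An /etc/fstab line with both options enabled might look like this (the device, mount point, and filesystem type are chosen for illustration):

```
# <device>   <mount point> <type> <options>                    <dump> <pass>
/dev/sda3    /home         ext4   defaults,usrquota,grpquota   0      2
```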
The edquota user (or edquota -g group) command allows you to change the limits while examining current disk space usage.
GOING FURTHER Defining quotas with a script
The setquota program can be used in a script to automatically change many quotas. Its setquota(8) manual page details the syntax to use.
The quota system allows you to set four limits:
two limits (called “soft” and “hard”) refer to the number of blocks consumed. If the filesystem was created with a block-size of 1 kibibyte, a block contains 1024 bytes from the same file. Unsaturated blocks thus induce losses of disk space. A quota of 100 blocks, which theoretically allows storage of 102,400 bytes, will however be saturated with just 100 files of 500 bytes each, only representing 50,000 bytes in total.
two limits (soft and hard) refer to the number of inodes used. Each file occupies at least one inode to store information about it (permissions, owner, timestamp of last access, etc.). It is thus a limit on the number of user files.
A “soft” limit can be temporarily exceeded; the user will simply be warned that they are exceeding the quota by the warnquota command, which is usually invoked by cron. A “hard” limit can never be exceeded: the system will refuse any operation that will cause a hard quota to be exceeded.
VOCABULARY Blocks and inodes
The filesystem divides the hard drive into blocks — small contiguous areas. The size of these blocks is defined during creation of the filesystem, and generally varies between 1 and 8 kibibytes.
A block can be used either to store the real data of a file, or for meta-data used by the filesystem. Among this meta-data, you will especially find the inodes. An inode uses a block on the hard drive (but this block is not taken into consideration in the block quota, only in the inode quota), and contains both the information on the file to which it corresponds (name, owner, permissions, etc.) and the pointers to the data blocks that are actually used. For very large files that occupy more blocks than it is possible to reference in a single inode, there is an indirect block system; the inode references a list of blocks that do not directly contain data, but another list of blocks.

With the edquota -t command, you can define a maximum authorized “grace period” within which a soft limit may be exceeded. After this period, the soft limit will be treated like a hard limit, and the user will have to reduce their disk space usage to within this limit in order to be able to write anything to the hard drive.
GOING FURTHER Setting up a default quota for new users
To automatically set up a quota for new users, you have to configure a template user (with edquota or setquota) and indicate their user name in the QUOTAUSER variable in the /etc/adduser.conf file. This quota configuration will then be automatically applied to each new user created with the adduser command.

9.10. Backup
Making backups is one of the main responsibilities of any administrator, but it is a complex subject, involving powerful tools which are often difficult to master.
Many programs exist, such as amanda, bacula, or BackupPC. These are client/server systems featuring many options, whose configuration is rather difficult. Some of them provide user-friendly web interfaces to mitigate this. But Debian contains dozens of other backup programs covering all possible use cases, as you can easily confirm with apt-cache search backup.
Rather than detailing some of them, this section will present the thoughts of the Falcot Corp administrators when they defined their backup strategy.
At Falcot Corp, backups have two goals: recovering erroneously deleted files, and quickly restoring any computer (server or desktop) whose hard drive has failed.
9.10.1. Backing Up with rsync
Backups on tape having been deemed too slow and costly, data will be backed up on hard drives on a dedicated server, on which the use of software RAID (see Section 12.1.1, “Software RAID”) will protect the data from hard drive failure. Desktop computers are not backed up individually, but users are advised that their personal account on their department's file server will be backed up. The rsync command (from the package of the same name) is used daily to back up these different servers.
BACK TO BASICS The hard link, a second name for the file
A hard link, as opposed to a symbolic link, cannot be differentiated from the linked file. Creating a hard link is essentially the same as giving an existing file a second name. This is why the deletion of a hard link only removes one of the names associated with the file. As long as another name is still assigned to the file, the data therein remain present on the filesystem.
It is interesting to note that, unlike a copy, the hard link does not take up additional space on the hard drive.
A hard link is created with the ln target link command. The link file is then a new name for the target file. Hard links can only be created on the same filesystem, while symbolic links are not subject to this limitation.
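The survival of the data as long as one name remains can be checked directly (file names arbitrary):

```shell
echo "important data" > original
ln original second-name          # second name for the same file

stat -c %h second-name           # link count: 2 names for this inode

rm original                      # removes one name, not the data
cat second-name                  # the content is still there
```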
The available hard drive space prohibits implementation of a complete daily backup. As such, the rsync command is preceded by a duplication of the content of the previous backup with hard links, which prevents usage of too much hard drive space. The rsync process then only replaces files that have been modified since the last backup. With this mechanism a great number of backups can be kept in a small amount of space. Since all backups are immediately available and accessible (for example, in different directories of a given share on the network), you can quickly make comparisons between two given dates.
This backup mechanism is easily implemented with the dirvish program. It uses a backup storage space (“bank” in its vocabulary) in which it places timestamped copies of sets of backup files (these sets are called “vaults” in the dirvish documentation).
The main configuration is in the /etc/dirvish/master.conf file. It defines the location of the backup storage space, the list of “vaults” to manage, and default values for expiration of the backups. The rest of the configuration is located in the bank/vault/dirvish/default.conf files and contains the specific configuration for the corresponding set of files.
Example 9.3. The /etc/dirvish/master.conf file

bank:
    /backup
exclude:
    lost+found/
    core
    *~
Runall:
    root    22:00
expire-default: +15 days
expire-rule:
#   MIN HR  DOM MON       DOW  STRFTIME_FMT
    *   *   *   *         1    +3 months
    *   *   1-7 *         1    +1 year
    *   *   1-7 1,4,7,10  1
The bank setting indicates the directory in which the backups are stored. The exclude setting allows you to indicate files (or file types) to exclude from the backup. Runall is a list of file sets to back up, with a timestamp for each set, which allows you to assign the correct date to the copy in case the backup is not triggered at precisely the assigned time. You have to indicate a time just before the actual execution time (which is, by default, 10:04 pm in Debian, according to /etc/cron.d/dirvish). Finally, the expire-default and expire-rule settings define the expiration policy for backups. The above example keeps forever backups that are generated on the first Sunday of each quarter, deletes after one year those from the first Sunday of each month, and after 3 months those from other Sundays. Other daily backups are kept for 15 days. The order of the rules does matter: Dirvish uses the last matching rule, or the expire-default one if no other expire-rule matches.
IN PRACTICE Scheduled expiration
The expiration rules are not used by dirvish-expire to do its job. In reality, the expiration rules are applied when creating a new backup copy to define the expiration date associated with that copy. dirvish-expire simply peruses the stored copies and deletes those for which the expiration date has passed.
Example 9.4. The /backup/root/dirvish/default.conf file

client: rivendell.falcot.com
tree: /
xdev: 1
index: gzip
image-default: %Y%m%d
exclude:
    /var/cache/apt/archives/*.deb
    /var/cache/man/**
    /tmp/**
    /var/tmp/**
    *.bak
The above example specifies the set of files to back up: these are files on the machine rivendell.falcot.com (for local data backup, simply specify the name of the local machine as indicated by hostname), especially those in the root tree (tree: /), except those listed in exclude. The backup will be limited to the contents of one filesystem (xdev: 1); it will not include files from other mount points. An index of saved files will be generated (index: gzip), and the image will be named according to the current date (image-default: %Y%m%d).
There are many options available, all documented in the dirvish.conf(5) manual page. Once these configuration files are set up, you have to initialize each file set with the dirvish --vault vault --init command. From there on, the daily invocation of dirvish-runall will automatically create a new backup copy just after having deleted those that expired.
IN PRACTICE Remote backup over SSH
When dirvish needs to save data to a remote machine, it will use ssh to connect to it, and will start rsync as a server. This requires the root user to be able to automatically connect to it. The use of an SSH authentication key allows precisely that (see Section 9.2.1.1, “Key-Based Authentication”).
9.10.2. Restoring Machines without Backups
Desktop computers, which are not backed up, will be easy to reinstall from custom DVD-ROMs prepared with Simple-CDD (see Section 12.3.3, “Simple-CDD: The All-In-One Solution”).
Since this performs an installation from scratch, it loses any customization that may have been made after the initial installation. This is fine since the systems are all hooked to a central LDAP directory for accounts and most desktop applications are preconfigured thanks to dconf (see Section 13.3.1, “GNOME” for more information about this).
The Falcot Corp administrators are aware of the limits in their backup policy. Since they can't protect the backup server as well as a tape in a fireproof safe, they have installed it in a separate room so that a disaster such as a fire in the server room won't destroy backups along with everything else. Furthermore, they do an incremental backup on DVD-ROM once per week —
only files that have been modified since the last backup are included.
GOING FURTHER Backing up SQL and LDAP services
Many services (such as SQL or LDAP databases) cannot be backed up by simply copying their files (unless they are properly interrupted during creation of the backups, which is frequently problematic, since they are intended to be available at all times). As such, it is necessary to use an “export” mechanism to create a “data dump” that can be safely backed up. These are often quite large, but they compress well. To reduce the storage space required, you will only store a complete text file per week, and a diff each day, which is created with a command of the type diff file_from_yesterday file_from_today. The xdelta program produces incremental differences from binary dumps.
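The weekly-full/daily-diff scheme can be sketched with standard tools; the dump files below are tiny made-up examples:

```shell
work=$(mktemp -d)
printf 'alice\nbob\n' > "$work/dump_sunday.sql"    # weekly complete text dump
printf 'alice\ncarol\n' > "$work/dump_today.sql"   # today's dump
# Store only the small daily difference (diff exits 1 when files differ):
diff "$work/dump_sunday.sql" "$work/dump_today.sql" > "$work/today.diff" || true
# To restore today's state, apply the diff to the weekly dump:
cp "$work/dump_sunday.sql" "$work/restored.sql"
patch -s "$work/restored.sql" "$work/today.diff"
restored=$(cat "$work/restored.sql")
rm -r "$work"
```

Only the full Sunday dump and the small daily diffs need to be kept; any day can be reconstructed from that pair.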
CULTURE TAR, the standard for tape backups
Historically, the simplest means of making a backup on Unix was to store a TAR archive on a tape. The tar command even got its name from “Tape ARchive”.

9.11. Hot Plugging: hotplug
9.11.1. Introduction
The hotplug kernel subsystem dynamically handles the addition and removal of devices, by loading the appropriate drivers and by creating the corresponding device files (with the help of
udevd). With modern hardware and virtualization, almost everything can be hotplugged: from the usual USB/PCMCIA/IEEE 1394 peripherals to SATA hard drives, but also the CPU and the memory.
The kernel has a database that associates each device ID with the required driver. This database is used during boot to load all the drivers for the peripheral devices detected on the different buses, but also when an additional hotplug device is connected. Once the device is ready for use, a message is sent to udevd so it will be able to create the corresponding entry in /dev/.
9.11.2. The Naming Problem
Before the appearance of hotplug connections, it was easy to assign a fixed name to a device. It was based simply on the position of the devices on their respective bus. But this is not possible when such devices can come and go on the bus. The typical case is the use of a digital camera and a USB key, both of which appear to the computer as disk drives. The first one connected may be /dev/sdb and the second /dev/sdc (with /dev/sda representing the computer's own hard drive). The device name is not fixed; it depends on the order in which devices are connected.
Additionally, more and more drivers use dynamic values for devices' major/minor numbers,
which makes it impossible to have static entries for the given devices, since these essential characteristics may vary after a reboot.
udev was created precisely to solve this problem.
IN PRACTICE Network card management
Many computers have multiple network cards (sometimes two wired interfaces and a wifi interface), and with hotplug support on most bus types, the Linux kernel does not guarantee fixed naming of network interfaces. But users who want to configure their network in /etc/network/interfaces need a fixed name!
It would be difficult to ask every user to create their own udev rules to address this problem. This is why udev was configured in a rather peculiar manner; on first boot (and, more generally, each time that a new network card appears) it uses the name of the network interface and its MAC address to create new rules that will reassign the same name on subsequent boots. These rules are stored in /etc/udev/rules.d/70-persistent-net.rules.
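An entry in this generated file looks something like the following (the PCI identifiers and MAC address here are made up for the example):

```
# PCI device 0x10ec:0x8139 (8139too)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:3e:12:34:56", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```

The MAC address (ATTR{address}) is what ties the eth0 name to one physical card, which explains the replacement-card behavior described below.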
This mechanism has some side effects that you should know about. Let's consider the case of a computer that has only one PCI network card. The network interface is named eth0, logically. Now say the card breaks down, and the administrator replaces it; the new card will have a new MAC address. Since the old card was assigned the name eth0, the new one will be assigned eth1, even though the eth0 card is gone for good (and the network will not be functional because /etc/network/interfaces likely configures an eth0 interface). In this case, it is enough to simply delete the /etc/udev/rules.d/70-persistent-net.rules file before rebooting the computer. The new card will then be given the expected eth0 name.
9.11.3. How udev Works
When udev is notified by the kernel of the appearance of a new device, it collects various information on the given device by consulting the corresponding entries in
/sys/
, especially those that uniquely identify it (MAC address for a network card, serial number for some USB
devices, etc.).
Armed with all of this information, udev then consults all of the rules contained in
/etc/udev/rules.d/
and
/lib/udev/rules.d/
. In this process it decides how to name the device, what symbolic links to create (to give it alternative names), and what commands to execute. All of these files are consulted, and the rules are all evaluated sequentially (except when a file uses “GOTO” directives). Thus, there may be several rules that correspond to a given event.
The syntax of rules files is quite simple: each row contains selection criteria and variable assignments. The former are used to select events for which there is a need to react, and the latter defines the action to take. They are all simply separated with commas, and the operator implicitly differentiates between a selection criterion (with comparison operators, such as == or !=) and an assignment directive (with operators such as =, += or :=).
Comparison operators are used on the following variables:
KERNEL: the name that the kernel assigns to the device;
ACTION: the action corresponding to the event (“add” when a device has been added, “remove” when it has been removed);
DEVPATH: the path of the device's /sys/ entry;
SUBSYSTEM: the kernel subsystem which generated the request (there are many, but a few examples are “usb”, “ide”, “net”, “firmware”, etc.);
ATTR{attribute}: file contents of the attribute file in the /sys/$devpath/ directory of the device. This is where you find the MAC address and other bus-specific identifiers;
KERNELS, SUBSYSTEMS and ATTRS{attributes} are variations that will try to match the different options on one of the parent devices of the current device;
PROGRAM: delegates the test to the indicated program (true if it returns 0, false if not). The content of the program's standard output is stored so that it can be reused by the RESULT test;
RESULT: execute tests on the standard output stored during the last call to PROGRAM.
The right operands can use pattern expressions to match several values at the same time. For instance, * matches any string (even an empty one); ? matches any character, and [] matches the set of characters listed between the square brackets (or the opposite thereof if the first character is an exclamation point; contiguous ranges of characters are indicated like a-z).

Regarding the assignment operators, = assigns a value (and replaces the current value); in the case of a list, it is emptied and contains only the value assigned. := does the same, but prevents later changes to the same variable. As for +=, it adds an item to a list. The following variables can be changed:
NAME: the device filename to be created in /dev/. Only the first assignment counts; the others are ignored;
SYMLINK: the list of symbolic links that will point to the same device;
OWNER, GROUP and MODE define the user and group that own the device, as well as the associated permissions;
RUN: the list of programs to execute in response to this event.
The values assigned to these variables may use a number of substitutions:
$kernel or %k: equivalent to KERNEL;
$number or %n: the order number of the device; for example, for sda3, it would be “3”;
$devpath or %p: equivalent to DEVPATH;
$attr{attribute} or %s{attribute}: equivalent to ATTRS{attribute};
$major or %M: the kernel major number of the device;
$minor or %m: the kernel minor number of the device;
$result or %c: the string output by the last program invoked by PROGRAM;
and, finally, %% and $$ for the percent and dollar sign, respectively.
The above lists are not complete (they include only the most important parameters), but the udev(7) manual page should be exhaustive.
9.11.4. A Concrete Example
Let us consider the case of a simple USB key and try to assign it a fixed name. First, you must find the elements that will identify it in a unique manner. For this, plug it in and run udevadm info -a -n /dev/sdc (replacing /dev/sdc with the actual name assigned to the key).
# udevadm info -a -n /dev/sdc
[...]
looking at device '/devices/pci0000:00/0000:00:10.3/usb1/1-2/1-2.2/1-2.2:1.0/host9/target9:0:0/9:0:0:0/block/sdc':
KERNEL=="sdc"
SUBSYSTEM=="block"
DRIVER==""
ATTR{range}=="16"
ATTR{ext_range}=="256"
ATTR{removable}=="1"
ATTR{ro}=="0"
ATTR{size}=="126976"
ATTR{alignment_offset}=="0"
ATTR{capability}=="53"
ATTR{stat}==" 51 100 1208 256 0 0 0 0 0 192 25 6"
ATTR{inflight}==" 0 0"

[...]
looking at parent device '/devices/pci0000:00/0000:00:10.3/usb1/1-2/1-2.2/1-2.2:1.0/host9/target9:0:0/9:0:0:0':
KERNELS=="9:0:0:0"
SUBSYSTEMS=="scsi"
DRIVERS=="sd"
ATTRS{device_blocked}=="0"
ATTRS{type}=="0"
ATTRS{scsi_level}=="3"
ATTRS{vendor}=="I0MEGA "
ATTRS{model}=="UMni64MB*IOM2C4 "
ATTRS{rev}==" "
ATTRS{state}=="running"
[...]
ATTRS{max_sectors}=="240"
[...]
looking at parent device '/devices/pci0000:00/0000:00:10.3/usb1/1-2/1-2.2':
KERNELS=="9:0:0:0"
SUBSYSTEMS=="usb"
DRIVERS=="usb"
ATTRS{configuration}=="iCfg"
ATTRS{bNumInterfaces}==" 1"
ATTRS{bConfigurationValue}=="1"
ATTRS{bmAttributes}=="80"
ATTRS{bMaxPower}=="100mA"
ATTRS{urbnum}=="398"
ATTRS{idVendor}=="4146"
ATTRS{idProduct}=="4146"
ATTRS{bcdDevice}=="0100"
[...]
ATTRS{manufacturer}=="USB Disk"
ATTRS{product}=="USB Mass Storage Device"
ATTRS{serial}=="M004021000001"
[...]
To create a new rule, you can use tests on the device's variables, as well as those of one of the parent devices. The above case allows us to create two rules like these:
KERNEL=="sd?", SUBSYSTEM=="block", ATTRS{serial}=="M004021000001", SYMLINK+="usb_key/disk"
KERNEL=="sd?[0-9]", SUBSYSTEM=="block", ATTRS{serial}=="M004021000001", SYMLINK+="usb_key/part%n"
Once these rules are set in a file, named for example /etc/udev/rules.d/010_local.rules, you can simply remove and reconnect the USB key. You can then see that /dev/usb_key/disk represents the disk associated with the USB key, and /dev/usb_key/part1 is its first partition.
GOING FURTHER Debugging udev's configuration
Like many daemons, udevd stores logs in /var/log/daemon.log. But it is not very verbose by default, and its logs are usually not enough to understand what is happening. The udevadm control --log-priority=info command increases the verbosity level and solves this problem. udevadm control --log-priority=err returns to the default verbosity level.

9.12. Power Management: Advanced Configuration and Power Interface (ACPI)
The topic of power management is often problematic. Indeed, properly suspending the computer requires that all the drivers for the computer's devices know how to put those devices to standby, and that they properly reconfigure the devices upon waking. Unfortunately, there are still a few devices unable to sleep well under Linux, because their manufacturers have not provided the required specifications.
Linux supports ACPI (Advanced Configuration and Power Interface) — the most recent standard in power management. The acpid package provides a daemon that looks for power management related events (switching between AC and battery power on a laptop, etc.) and that can execute various commands in response.
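Such responses are declared in small event files under /etc/acpi/events/; a minimal sketch (the file name and the chosen action are hypothetical, adapt them to your needs) might look like this:

```
# /etc/acpi/events/powerbtn (hypothetical example)
# When the power button event is seen, run the given command:
event=button/power.*
action=/sbin/shutdown -h now
```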
BEWARE Graphics card and standby
The graphics card driver is often the culprit when standby doesn't work properly. In that case, it is a good idea to test the latest version of the X.org graphics server.
After this overview of basic services common to many Unix systems, we will focus on the environment of the administered machines: the network. Many services are required for the network to work properly. They will be discussed in the next chapter.