The FAQ is divided into three documents. The General FAQ has links to all questions and answers. The LFS FAQ is a selection of LFS-specific FAQs, and the BLFS FAQ is a selection of BLFS-specific FAQs.
Marc Heerdink may have said it best in a post to lfs-dev:
The problem is that the FAQ is a dynamic document. The FAQ for a book release is released only after the book version itself, because the FAQ is updated to reflect the Qs asked about the current version of the book. A link is better, since you'll always have the most up-to-date answers handy.
This is fairly well discussed in the thread starting at this post.
Package management - beyond that provided by tarballs and makefiles - is beyond the scope of the book. If nothing else does, the number of different "solutions" should hint at some of the reasons.
Here are a few of the options:
If you have an addition to the list, please do email its id, URL, and other information, to the FAQ maintainer or an appropriate LFS mailing list so it can be added here.
Power management is a kernel function; you need to enable it in the kernel. In the 5.11 kernel, you have to enable the options for ACPI (Advanced Configuration and Power Interface) under "Power management and ACPI options".
For very old 32-bit x86 machines you'll probably want the APM options; newer machines often require ACPI. Make sure that either APM or ACPI is enabled in the kernel, but definitely not both at the same time - this has been known to cause problems such as neither actually taking effect. Also try disabling SMP if you only have one processor; it's also known to prevent a proper poweroff. Make sure you read the help for each option.
After rebooting into the new kernel you should be able to power off your machine with the command shutdown -h now or poweroff (also read man shutdown and man halt). If you compiled APM or ACPI as modules, make sure they are loaded before you try to power off. Some machines require that APM or ACPI is compiled into the kernel because it needs to be initialised at boot time.
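As a sketch (symbol names from mainline kernels; verify them against the help texts in your kernel version), the relevant lines in the kernel's .config look like this when ACPI power management is enabled and APM is left off:

```text
CONFIG_PM=y
CONFIG_ACPI=y
# CONFIG_APM is not set
```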
GRUB can be built for UEFI, but doing so needs several packages beyond the scope of LFS. You can consult the BLFS page for it.
Short answer: no.
Long answer: we want LFS to be "settled down": if a package is rebuilt in the LFS system, the result (libraries and binaries) should be the same as the result at the end of the LFS book. In Chapter 6 the tools are cross compiled, so many tests in the configure script can't be run. "Guessed" results are used instead, unnecessary workarounds are enabled, or optional features are disabled. The tools in Chapter 7 are built to resolve circular dependencies: many of their optional features depend on packages not built yet and have to be disabled. So rebuilding them in Chapter 8 is necessary.
On the other hand, if you are building Linux for some really tiny platform where you can't build packages in a reasonable time (for example, a 16 MHz ARM), you can cross compile everything in Chapter 6, since Chapters 7 and 8 are not applicable. Or you can do Chapters 7 and 8 with an emulator like QEMU.
Short answer: no.
Long answer: the LSB mandates that the ELF loader be at /lib64/ld-linux-x86-64.so.2 on x86-64.
Even longer answer: when the kernel is told to execute a dynamically linked ELF executable, it reads the path to the ELF loader hard-coded in the ELF executable, which is /lib64/ld-linux-x86-64.so.2. So, if that path does not exist, LFS won't be able to run any dynamically linked executable compiled elsewhere. For example, executables from commercial software packages (MATLAB or COMSOL) or binaries downloaded from a GitHub release page will not run.
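You can see the hard-coded loader path yourself with readelf from binutils (shown here on /bin/sh, but any dynamically linked executable works):

```shell
# Print the PT_INTERP entry, i.e. the ELF loader path the kernel will use.
readelf -l /bin/sh | grep 'interpreter'
```

On a glibc x86-64 system this prints a line containing /lib64/ld-linux-x86-64.so.2.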
Most relatively recent distributions should be fine. You could consult the Host System Requirements page.
Make sure you have installed and/or updated the development packages. (Look for ones starting with "gcc", "glibc", or "libstdc++", or ending in "-dev" or "-devel".)
If you want to use LFS as your main system and wish to install it without first installing a distribution, it is also possible to use a live image on a DVD or USB stick. All major distributions provide one.
See /usr/share/doc/linux-x.y.z (or wherever you unpacked your kernel source), the help in the kernel configuration tool (make menuconfig), and the Module-HOWTO at https://www.tldp.org/HOWTO/Module-HOWTO/.
Short answer: no.
Long answer: probably, but only to someone working on the package you're trying to compile. Mostly, everything will be fine unless make quits with an error.
Here's an example:
sk ~/tmp $ cat > Makefile
main:
	gcc main.c
sk ~/tmp $ cat > main.c
void main() { exit(0); }
sk ~/tmp $ make
gcc main.c
main.c: In function `main':
main.c:1: warning: return type of `main' is not `int'
sk ~/tmp $ ######## that worked ########
sk ~/tmp $
sk ~/tmp $ cat > main.c
int main() { exxit(0) }
sk ~/tmp $ make
gcc main.c
main.c: In function `main':
main.c:1: parse error before `}'
make: *** [main] Error 1
sk ~/tmp $ ######## that failed ########
sk ~/tmp $
If you can determine that some warning indicates a real bug in the software, report it to the package's maintainer.
For information about building LFS for a wide array of systems, take a look at the Cross-LFS branch of LFS.
For ARM, consult LFS Fork for ARM (64-bit or 32-bit) (SysV and Systemd), maintained by William Harrington; and another LFS ARM64 Branch (64-bit only, Systemd and SysV [NOT TESTED!]), maintained by Xi Ruoyao.
It's often useful to compile LFS for one machine on another machine - say, using a fast 1 GHz Athlon to build an install for an old 486. While this is technically not cross compiling, binaries compiled for the Athlon cannot run on the 486 because binaries compiled for the newer processor use features the older processor doesn't have.
The LFS book specifically for cross compiling is the Cross-LFS book. Another source of information would be the cross-compiling hint.
The resources provided above are quite outdated. You can modify the LFS building process after LFS 10.0 (developed by Pierre Labastie) to cross compile LFS: set $LFS_TGT to the triplet of your target platform, and build the cross toolchain in Chapter 5 and the temporary tools in Chapter 6 as normal. At the end of Chapter 6, build a kernel and a bootloader for the target. Then copy the temporary system to the target platform, boot it, and continue the build from Chapter 7. Read the clfs-ng branch (SysV and Systemd) for details.
It has to do with the characters used to end lines. There are two that may be used: carriage return (CR) and line feed (LF). Unix, DOS, and MacOS each use a different combination to end lines in text files: Unix uses LF alone, DOS uses CR followed by LF, and classic MacOS uses CR alone.
To change DOS to Unix, use:
cp <fileid> <fileid>.dos &&
tr -d '\r' < <fileid>.dos > <fileid>
Or in vim, you can convert a file with :set ff={unix, dos, mac}. Other conversions will probably require sed or a different use of tr and are left as an exercise for the reader.
Yes. You can download the file LFS-BOOK-x.y-wget-list from https://www.linuxfromscratch.org/lfs/downloads/stable/wget-list. To download all the files, use the version of wget on your host distribution to run:
wget --input-file=LFS-BOOK-x.y-wget-list
If you're getting errors and you're setting CFLAGS or otherwise passing optimization flags to the compiler, that may be the problem.
If you ask on the list and they can't figure it out immediately, they'll likely suggest trying it without optimization. So if you just retry it without optimization before asking, you'll be one step ahead of them :)
Of particular note is that optimizing binutils, gcc, or glibc may cause any other package to fail to compile or run or to otherwise misbehave in strange and mysterious ways. Also, optimization that works for someone else may not work for you. Flags that used to work may mysteriously stop working. Even some small innocent hardware change can make the difference.
(If you don't know what optimization flags are, don't worry, you really don't need to.)
To determine what is present on the system, the configure scripts try various commands with various command line options. They then take actions depending on the exit code of the commands. Some of those commands may write error messages, and this is what you see, for example with "gcc -V". But the configure script itself has not failed.
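The mechanism can be sketched with a tiny helper of the same shape configure's checks use (probe is a made-up name for illustration): run a command, throw away all its output, and branch only on the exit status.

```shell
# probe runs any command, discards its output and error text,
# and reports only whether it succeeded - just like a configure check.
probe() {
    if "$@" >/dev/null 2>&1; then
        echo "yes"
    else
        echo "no"
    fi
}

probe true     # succeeds: prints "yes"
probe false    # fails:    prints "no"
```

Any error text the probed command writes before configure redirects it is exactly the kind of harmless message you see with "gcc -V".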
You have over-optimized GCC.
Yes. In general make clean or make distclean can't be relied upon to restore clean sources. Especially when you have manually hacked the sources or applied patches to them, you should first try again with a freshly unpacked package. The only exception to this rule is the Linux kernel, which requires its sources to be present when third-party modules, such as the NVidia drivers, are needed.
Does /dev/null look like this:
$ ls -l /dev/null
crw-rw-rw- 1 root root 1, 3 Aug 3 2000 /dev/null
If not, it should. Refer to the entry about the "configure: loading cache /dev/null" message in config.log.
If it does look right, the problem is probably your mount options. See the answer to "./configure: bad interpreter: Permission denied", above.
The long answer is at https://www.bitwizard.nl/sig11/.
The short answer is that if restarting make gets a little further every time, you have a hardware problem. (If make, or whatever you're running, fails at the same place every time, then it is not hardware.)
First, note that the CPU itself may be broken. The 13th and 14th generation Intel Core processors in particular have a notorious stability issue. Update the BIOS for the latest microcode (revision 0x129 or later) and enable the "Intel Recommended Profile" switch. If you've already updated the BIOS and enabled this switch, and ruled out the other possible causes of the segmentation fault but it still occurs, the CPU is likely already permanently damaged. Contact Intel or the OEM for a replacement (or, if you are so unlucky that the OEM has already disappeared, you'll have to buy a new processor).
Do not overclock the CPU too much. When you apply an overclock setting, you must stress test the system adequately to make sure it's stable under different types of workloads (single-core and multi-core, non-AVX and AVX, etc.). Note that with an Intel "K" (unlocked) CPU, even the default configuration of the motherboard is often already overclocking, so you may need to "step back" from the default, especially when your individual CPU is below average: the variation among CPUs of the same model can have a significant effect on overclocking headroom. For the 13th and 14th generation Core processors, use the "Intel Recommended Profile" switch to disable such a "default overclocking setting", as mentioned above.
Assuming you're not overclocking, the most likely hardware problem is bad memory that you can check with Memtest86+ from https://www.memtest.org/.
CPU overheating is another common hardware problem. Ensure the cooler is properly installed with thermal paste applied. Also, some coolers (especially all-in-one liquid coolers) cannot be configured via the BIOS; they need a kernel driver and/or special software (for example, liquidctl) to set their parameters correctly. If such a cooler is not properly configured, it can run at a lower speed (or not run at all). If the cooler is already running at full speed but the CPU still overheats, either upgrade the cooler, or downclock the CPU via the BIOS (set a hard limit on the frequency, or decrease the power/temperature limit at which the CPU starts to downclock itself).
If both bad memory and CPU overheating can be ruled out, see the long answer.
An example of this error is:
/usr/bin/env: /bin/bash: No such file or directory
If you are sure $LFS/bin/bash exists, what likely happens is that the path to the dynamic linker embedded inside the executable is /lib64/ld-linux-x86-64.so.2 (/lib/ld-linux.so.2 for 32-bit), and when the binary is run inside the chroot where /lib64/ld-linux-x86-64.so.2 does not exist yet, the very unhelpful "No such file or directory" error message is shown.
Check whether the symlinks $LFS/lib64/ld-linux-x86-64.so.2 (it should target ../lib/ld-linux-x86-64.so.2, or ../lib/ld-linux.so.2 for 32-bit) and/or $LFS/lib (it should target usr/lib) are broken. Note that these symlinks must be relative (i.e. it should be ../lib/ld-linux-x86-64.so.2, not $LFS/lib/ld-linux-x86-64.so.2) so they are still valid in the chroot environment.
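A quick way to verify is readlink, which prints a symlink's target; a relative target is what you want. Sketched here in a scratch directory (/tmp/lfs-demo is hypothetical; the real link lives under $LFS/lib64):

```shell
# Recreate the loader symlink the way the book does, then inspect it.
mkdir -p /tmp/lfs-demo/lib64
ln -sf ../lib/ld-linux-x86-64.so.2 /tmp/lfs-demo/lib64/ld-linux-x86-64.so.2
readlink /tmp/lfs-demo/lib64/ld-linux-x86-64.so.2   # prints ../lib/ld-linux-x86-64.so.2
rm -rf /tmp/lfs-demo
```

On the real system, readlink $LFS/lib64/ld-linux-x86-64.so.2 should print the same relative target.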
You forgot to cd into the extracted directory of the package after extracting it.
You're most likely getting this while building binutils in Chapter 5 of the LFS Book. The problem is most likely your mount options. You probably have a line in /etc/fstab like:
/dev/sda10 /mnt/lfs ext2 user 1 2
'user' is the mount flag, and it's the problem. To quote from the mount man page:
user: Allow an ordinary user to mount the file system. This option implies the options noexec, nosuid, and nodev (unless overridden by subsequent options, as in the option line user,exec,dev,suid).
So change the line in /etc/fstab like this:
/dev/sda10 /mnt/lfs ext2 defaults 1 2
Typical symptoms look like this:
sk ~/tmp-0.0 $ ./configure
creating cache ./config.cache
checking host system type...
configure: error: can not guess host type; you must specify one
sk ~/tmp-0.0 $
The problem is usually that the script can't run the compiler. Usually it's just a missing /usr/bin/cc symlink. You can fix it like this:
cd /usr/bin && ln -s gcc cc
If that doesn't do it, check the file config.log created by configure. Errors are recorded there and may indicate the problem.
If you're getting an error from configure like:
checking whether we are using GNU C... no
configure: error: GNU libc must be compiled using GNU CC
It may be because grep isn't working. To test if grep is working in the chroot environment, run the following command from inside chroot:
grep -E root /etc/passwd
If it doesn't print root's line from /etc/passwd, again, you have a problem. (This test also works if you encounter the problem after rebooting into the new LFS system.)
If it happens in the LFS chroot environment, ensure your host kernel supports UNIX 98 pseudo terminal (all not-so-old desktop or server distros should support it), and the virtual kernel file systems have been mounted correctly before entering the chroot environment.
If it happens in the complete system built following the SysV revision of the LFS book, it's likely you've missed the line for the devpts filesystem in /etc/fstab.
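For reference, the devpts line in /etc/fstab usually looks like this (gid=5 assumes the tty group has GID 5, as in the LFS book):

```text
devpts  /dev/pts  devpts  gid=5,mode=620  0  0
```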
Check if config.log contains "configure: loading cache /dev/null". If it does, refer to the entry for it.
If it happens in the LFS chroot environment, it's likely you've forgotten to bind mount /dev to $LFS/dev in "Preparing Virtual Kernel File Systems".
Exit from the chroot environment first. Then run ls -l /dev/null. It should output something like:
crw-rw-rw- 1 root root 1, 3 {some date} /dev/null
In particular, the first letter c and the numbers 1, 3 must be correct.
If not, it means your host distro is somehow broken (this may happen if you used the dangerous rm -rf $LFS/* command or similar while /dev was bind mounted). For a modern host distro it can be fixed by rebooting (a broken /dev may prevent a normal reboot and you may need to use the reset button). For a very old host distro you may need to reinstall it (so why not update to a modern one? :)
Now that we know the host distro is sane, make sure $LFS is correctly set and the LFS partition is mounted first. Use umount -R $LFS/dev to unmount $LFS/dev (in case you've mounted something wrong there), then remove everything in $LFS/dev and follow the "Preparing Virtual Kernel File Systems" section to mount $LFS/dev and $LFS/dev/pts correctly. Once they are mounted, you can reenter the chroot environment and continue.
This error message usually indicates that the limits.h provided by GCC isn't including limits.h from Glibc as it should. There is a command in GCC Pass 1 serving as a workaround for limits.h. Do not forget to run that command.
In LFS 10.0 through 11.3, there is another workaround command: running mkheaders after installing Glibc (Chapter 5). This command has been removed in LFS 12.0. Either running this command while building LFS 12.0 or later (likely because of a reuse of old scripts - note that such reuse is strongly discouraged), or forgetting this command while building LFS 10.0 through 11.3, will also lead to this error message.
If you've encountered this issue, untar the GCC tarball again and run the command at the bottom of the GCC Pass 1 page to create limits.h. Then, if you are building LFS 12.0 or later, run rm -f $LFS/tools/lib/gcc/$LFS_TGT/*/include-fixed/limits.h, which fixes the issue in case you've mistakenly run the mkheaders command that does not belong to the LFS version you are building. If you are building LFS 11.0 through 11.3, run the mkheaders command in Chapter 5 Glibc.
It's likely because the /etc/passwd for the SysV revision is misused in a systemd-based system. "No such process" is just the "standard" error message for ESRCH; it's not very helpful for diagnosing this issue.
There are several reasons why the kernel might be unable to mount the root filesystem.
One common cause is an incorrect entry in /boot/grub/grub.cfg.
When you see, in your syslogs, this line:
init: Id "1" respawning too fast: disabled for 5 minutes
It means you have an error in the /etc/inittab line beginning with the given id ("1" in this example).
The full error looks like this:
eth0:unknown interface:No such device [failed]
Setting up default gateway...
SIOCADDRT:No such device [failed]
eth0 is a virtual device with no /dev entry. It refers to the first detected network card in your system. The reason the kernel can't find this device is because you forgot to add support for your network card in the kernel. The kernel detected the card but doesn't have a driver for it. The LFS boot script tries to bring up the network but fails because of this.
Recompile your kernel with the proper driver, either built in or as a module. If you compiled the network driver as a module, then also adjust /etc/modules.conf to alias the network card module as eth0; for example: alias eth0 8139too. If you don't know which network card you have, you can use dmesg, /proc/pci or lspci to find out.
Also, udev may rename your network devices. For example, eth0 may be renamed to enp4s0. You can run the ip link command after booting the LFS system and examine the output to learn the names of your network devices.
It may be a bug in the firmware (BIOS) or in the kernel's drivers. Some hardware vendors use Windows-specific hacks in their BIOS, which the Linux kernel misinterprets, causing this kind of issue.
If you see messages like this but your system functions normally, you can ignore them. If the system malfunctions, you can try combinations of several kernel options as workarounds: irqpoll, noapic, pci=nocrs, and i8042.nopnp=1.
You can also try an ACPI DSDT override if you really understand it.
You can always report this kind of issue to the kernel bug tracker, whether or not it's a BIOS bug. The kernel developers want to make Linux runnable even if the BIOS has this kind of bug.
If the LFS system is slower than another distro but not much slower, it's normal. We focus on building a Linux system from source code and do nothing to tweak the system for marginal performance improvements. Other distros may enable additional compiler optimizations, tune kernel options (via sysctl), or use other approaches to squeeze out more performance. Also, LFS uses the latest GCC release, which is likely slower than an older GCC release: a new GCC release often attempts to optimize the target code more heavily (these optimizations slow down compilation, but hopefully make the compiled program faster). So LFS will take more time building a large package (like the Linux kernel).
But, if the LFS system is very slow (for example, it takes 5 hours to build a kernel while the host distro needs only one hour to build the same kernel with exactly the same configuration), it likely indicates a CPU dynamic frequency scaling issue. You can monitor the value of "cpu MHz" in /proc/cpuinfo to see if your CPU is running at a reasonable frequency while a workload (like building the kernel) is running on the CPU.
If your CPU is running at a much lower frequency than expected (an Intel Core i3 building the kernel but running at only 800 MHz is definitely too slow), try to adjust the setting of "Default CPUFreq governor" in the kernel configuration and rebuild the kernel. The ideal setting should limit the CPU at a low frequency when the system is idle, but boost it to the maximum performance while a workload is running.
Note that a governor may behave differently on different CPUs. For example, the "powersave" governor may work fine for one CPU model, but lock another CPU at 800 MHz no matter if there is a workload running. If you can't (or don't want to spend too much time to) find an ideal setting, use the "performance" governor.
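One way to watch both values, assuming an x86 machine where /proc/cpuinfo exposes a "cpu MHz" field and the standard cpufreq sysfs layout is present:

```shell
# Per-core frequency as seen by the kernel (x86 field name assumed).
grep 'cpu MHz' /proc/cpuinfo || echo "no 'cpu MHz' field on this architecture"

# The active cpufreq governor of core 0, if the interface is exposed.
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null \
    || echo "cpufreq interface not available"
```

Run the first command while a build is in progress and compare against your CPU's rated frequency.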
On a modern Intel or AMD processor, the "energy performance preference" may also have a significant impact on performance. The default setting is usually "balance_performance", which may severely throttle performance, especially when the CPU has many cores but only a few are utilized (for example, when measuring the SBU with make -j1 on a Core i9-13900K). The easiest way to manage the energy performance preference is via power-profiles-daemon; read power-profiles-daemon (SysV) or power-profiles-daemon (Systemd) for how to install and use it. Alternatively, you can try to change this setting via the BIOS if a configuration entry is provided.
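On kernels using the intel_pstate or amd-pstate driver, the current preference is also visible in sysfs; this sketch guards against systems that don't expose it:

```shell
# Energy Performance Preference of core 0 (path provided by the
# intel_pstate/amd-pstate drivers; not every system/driver exposes it).
epp=/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference
if [ -r "$epp" ]; then
    cat "$epp"
else
    echo "EPP not exposed by this CPU/driver"
fi
```

As root you can also write a value such as "performance" into the same file to change the preference until the next boot.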