The systems I am working with have 80 or more cores. I’ve recently had to investigate the processes around core start-up. Here are my notes.
Continue reading
Enabling ARM64 CPU Capabilities in the Linux Kernel
The ARM64 design defines features long before there is a CPU that can implement those features. Since the ARM ecosystem is so varied, there are many different CPU designs out there with different capabilities. A general-purpose Linux Kernel build put out by a major distribution has to work across a wide array of chips from a large number of vendors. Thus, there is an enumeration of the capabilities inside the Kernel and a mechanism for describing how to probe each of these capabilities.
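A quick way to see the end result of that probing from a running system (nothing Kernel-internal here, just the userspace view of what got detected):

# The Features line lists the capabilities the Kernel detected and exposes to userspace
grep Features /proc/cpuinfo | head -1
# The boot log records each capability as it is detected
dmesg | grep -i "CPU features"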
Continue reading

Finding Linux Kernel Config options in menuconfig
We have reason to believe that we should not be setting CONFIG_EFI_DISABLE_RUNTIME=y in our Kernel configs. I want to perform a controlled experiment booting two Kernel builds, one with this option set and one with it disabled. Since I already have the option set, building that Kernel is trivial.
make olddefconfig
make -j$(nproc) rpm-pkg
Now, to turn that option off, I could just edit the .config file. However, it is possible that there are other config options linked to that one, and there is logic to modify them together. I want to see what happens if I use make menuconfig to change the option to confirm (or deny) that only that option gets changed. But where do I find this option in the menu?
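The interactive search is the approach I plan to try. A sketch of the steps (the “/” search is a standard menuconfig feature, and scripts/config is the helper that ships in the Kernel tree):

# From the top of the Kernel tree, open the menu-driven editor
make menuconfig
# Press "/" and search for EFI_DISABLE_RUNTIME (no CONFIG_ prefix); the result
# shows the menu location, its dependencies, and anything that selects it.

# Non-interactive alternative: flip the symbol, then let olddefconfig
# resolve anything linked to it
scripts/config --disable EFI_DISABLE_RUNTIME
make olddefconfig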
Continue reading

Building and Running the Linux Kernel Selftests on AARCH64/Fedora
I won’t go into checking out or building the Kernel, as that is covered elsewhere. Assuming you have a buildable Kernel, you can build the tests with:
make -C tools/testing/selftests
But you are probably going to see errors like this:
ksm_tests.c:7:10: fatal error: numa.h: No such file or directory
    7 | #include <numa.h>
      |          ^~~~~~~~
compilation terminated.
The userland test suites use several libraries and need the corresponding headers to compile the tests that call those libraries. Here is the yum line I ran to get the dependencies I needed for my system:
sudo yum install libmnl-devel fuse-devel numactl-devel libcap-ng-devel alsa-lib-devel
With those installed, the make line succeeded.
Running the test like this CRASHED THE SYSTEM. Don’t do this.
make -C tools/testing/selftests run_tests
A more sensible test to run is the example on the Docs page:
# make -C tools/testing/selftests TARGETS=ptrace run_tests
make: Entering directory '/root/linux/tools/testing/selftests'
make --no-builtin-rules ARCH=arm64 -C ../../.. headers_install
make[1]: Entering directory '/root/linux'
  INSTALL ./usr/include
make[1]: Leaving directory '/root/linux'
make[1]: Entering directory '/root/linux/tools/testing/selftests/ptrace'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/root/linux/tools/testing/selftests/ptrace'
make[1]: Entering directory '/root/linux/tools/testing/selftests/ptrace'
TAP version 13
1..3
# selftests: ptrace: get_syscall_info
# TAP version 13
# 1..1
# # Starting 1 tests from 1 test cases.
# #  RUN           global.get_syscall_info ...
# #            OK  global.get_syscall_info
# ok 1 global.get_syscall_info
# # PASSED: 1 / 1 tests passed.
# # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
ok 1 selftests: ptrace: get_syscall_info
# selftests: ptrace: peeksiginfo
# PASS
ok 2 selftests: ptrace: peeksiginfo
# selftests: ptrace: vmaccess
# TAP version 13
# 1..2
# # Starting 2 tests from 1 test cases.
# #  RUN           global.vmaccess ...
# #            OK  global.vmaccess
# ok 1 global.vmaccess
# #  RUN           global.attach ...
# # attach: Test terminated by timeout
# #          FAIL  global.attach
# not ok 2 global.attach
# # FAILED: 1 / 2 tests passed.
# # Totals: pass:1 fail:1 xfail:0 xpass:0 skip:0 error:0
not ok 3 selftests: ptrace: vmaccess # exit=1
make[1]: Leaving directory '/root/linux/tools/testing/selftests/ptrace'
make: Leaving directory '/root/linux/tools/testing/selftests'
Next up is to write my own stub test.
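For reference, the kind of thing I have in mind. This is a hypothetical stub, not anything in the tree yet; kselftest will run plain scripts listed in a target’s Makefile via TEST_PROGS, and the TAP output mirrors what the ptrace run above printed:

#!/bin/sh
# stub_test.sh: minimal placeholder selftest that reports a single
# passing TAP test case and exits 0 for "pass"
echo "TAP version 13"
echo "1..1"
echo "ok 1 stub: nothing to test yet"
exit 0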
Labeling a Linux Kernel RPM
You can use the Kernel build system to make your own RPMs using the target:
make rpm-pkg
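To make the resulting package identifiable as yours, the simplest knob I know of is the local version string, which setlocalversion appends to the Kernel release. A sketch, with -mytest as a placeholder label:

# Append a label so the RPM name and `uname -r` reflect this build
make olddefconfig
make -j$(nproc) LOCALVERSION=-mytest rpm-pkg

The same effect is available persistently through CONFIG_LOCALVERSION in the .config.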
Continue reading

A Non-authoritative history of Preemptive Multitasking in the personal computing world.
Back when machines only had one or two CPUs (still the case for embedded devices), the OS Kernel was responsible for making sure that the machine could process more than one instruction “path” at a time. I started coding back on the Commodore 64, and there it was easy to lock up the machine: just run a program that loops forever doing nothing. I’d have to look back at the old Programmer’s Guide, but I am pretty sure that a program had to voluntarily give up the CPU if you wanted any form of multi-tasking.
The alternative is called “preemptive multitasking” where the hardware provides a mechanism that can call a controller function to switch tasks. The task running on the CPU is paused, the state is saved, and the controller function decides what to do next.
Continue reading

Looking at ACPI PPTT from Userspace
The sysfs file system is used to expose Linux Kernel constructs to user space. One place we can see ACPI-based information is in /sys/firmware/acpi.
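The raw tables show up as files under that directory, so a first look at the PPTT needs nothing more than standard tools:

# Each ACPI table the firmware handed over appears as a file named after its signature
ls /sys/firmware/acpi/tables
# Peek at the PPTT header (signature, length, revision) to confirm it is there
hexdump -C /sys/firmware/acpi/tables/PPTT | head

For a full decode, the acpica-tools package provides iasl, which can disassemble a copy of the binary table into a readable listing.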
Continue reading

ACPI subsystem initialization
Many other modules might trigger ACPI device registration. This means that the basic ACPI subsystem has to be up and available before much of the hardware is usable. Hence, we can see that the ACPI subsystem gets registered here. What I am not certain of is when this code gets called.
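One way to answer that empirically, since acpi_init appears to be registered as a subsys_initcall: turn on initcall tracing for a boot and see where it lands relative to everything else. A sketch, with a deliberately loose grep pattern:

# Add "initcall_debug ignore_loglevel" to the Kernel command line, reboot,
# then look for the ACPI initcalls in the boot log
dmesg | grep -i "calling.*acpi"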
Continue reading

ACPI root pointer from UEFI System Table.
As I found out after I posted my last entry, the correct way to find the root pointer for the ACPI tables is to get it from the EFI System Table. Where does that get set? Here’s the general flow: again, we start at start_kernel in init/main.c. However, the call is not in the ACPI code, but rather in setup_arch. The call chain goes
start_kernel->setup_arch->efi_init->efi_get_fdt_params, and that seems to pull it out of initial_boot_params. I can’t quite see where that is initialized. Yet. From context it looks like it is constructed out of the kernel command line parameters. Still learning….
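While I chase that down in the source, the values themselves are visible on a booted system, which at least confirms what the Kernel ended up with. A sketch (note that /sys/firmware/efi/systab is deprecated on newer Kernels, but the boot log lines should be there regardless):

# The EFI System Table entries the Kernel recorded, including the ACPI (RSDP) pointers
cat /sys/firmware/efi/systab
# The boot log shows the same discovery path: the efi: line lists the
# configuration tables, and the RSDP line shows the root pointer in use
dmesg | grep -iE "efi:|RSDP"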
Building Linux tip-of-tree on an Ampere based system
I have an Ampere Altra-Max/INGRASYS Yushan Server System running CentOS Stream 8.
Because we are a chip manufacturer, we don’t sell end systems; we provide a reference platform that is a starting point for our customers to make a product. This leads to a bizarre set of internal versus external names. One thing that you can rely on, however, is the identifier of the processor itself:
# cat /proc/cpuinfo
processor : 0
BogoMIPS : 50.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x3
CPU part : 0xd0c
CPU revision : 1
...
To make this readable, use the utility lscpu:
[root@eng14sys-r111 ~]# lscpu
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 1
Core(s) per socket: 80
Socket(s): 1
NUMA node(s): 1
Vendor ID: ARM
BIOS Vendor ID: Ampere(R)
Model: 1
Model name: Neoverse-N1
BIOS Model name: Ampere(R) Altra(R) Processor
Stepping: r3p1
CPU max MHz: 3000.0000
CPU min MHz: 1000.0000
BogoMIPS: 50.00
L1d cache: 64K
L1i cache: 64K
L2 cache: 1024K
NUMA node0 CPU(s): 0-79
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp ssbs
I want to build the latest Kernel from Linus’s tree and run it on the server. Here are the steps I went through.
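In outline, the flow matches the rpm-pkg approach from earlier in these notes (a sketch only; the full, detailed steps are behind the link, and the clone URL is the standard mainline tree):

# Grab tip-of-tree and seed the config from the running distribution Kernel
git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
cp /boot/config-$(uname -r) .config
make olddefconfig
# Build and package as RPMs, then install them and reboot into the new Kernel
make -j$(nproc) rpm-pkg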
Continue reading