Bryan R Hinton
Embedded systems, Linux, memory, and meaning
Wednesday, March 11, 2026
Out of Deep Respect: 1700 Years of Jewish Life
Monday, March 9, 2026
The Secret Life of Ordinary Things: Mapping the Hidden Colours of Blood
The content on this blog represents my personal exploration of computational image processing techniques and is entirely separate from my professional work. None of it is AI-generated. All opinions, experiments, and analyses are my own.
The techniques discussed here are presented for educational and research purposes only. While I strive for accuracy, this is a space for experimentation and discussion at the forefront of forensic image analysis, not a substitute for peer-reviewed research or certified forensic methodology, even though these emerging techniques can produce more accurate results than many traditional methods.
The Setup
There are some things that stay quiet for a very long time, waiting for someone to finally understand their language. It feels a bit like finding a letter that was never meant to be read, but now that I have seen it, I cannot look away. I wanted to use every tool I have to find the light still hidden in those shadows, to show that even the smallest spark from the past refuses to be completely extinguished.
The Chemical Chain of Decay
Aged blood detection relies on the fact that hemoglobin goes through a predictable degradation chain, with each stage featuring distinct light absorption signatures:
- Fresh Blood: Contains oxyhemoglobin with strong absorption peaks at roughly 415 nm (the Soret band), 540 nm, and 577 nm.
- The First Shift: Within hours, it deoxygenates, shifting the 540 nm and 577 nm doublet into a single broad absorption around 555 nm.
- Oxidation: Over days to weeks, it oxidizes to methemoglobin, revealing a distinct peak at roughly 630 nm that fresh blood lacks.
- Archival Age: Over months to years, it degrades into hemichrome and hematin. These highly stable end products cause the Soret band to shift from 415 nm down toward 405 nm.
Algorithmic Detection Indices
To classify such samples, the script computes several specific indices that target these aged spectral signatures rather than fresh ones:
- NDBI (580 nm vs. 630 nm ratio): Fresh blood absorbs at 580 nm but not at 630 nm, while aged blood absorbs at both; the index isolates the methemoglobin peak.
- Soret Ratio (415 nm vs. 630 nm): Catches the relative shift between the two major absorption features as the sample ages.
- Met Index (630 nm / 540 nm): Directly measures the methemoglobin concentration relative to the other hemoglobin forms.
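A minimal sketch of how these indices might be computed from a reconstructed spectral cube. The band-selection helper and normalization choices here are my illustrative assumptions, not the actual script:

```python
import numpy as np

def band(cube, wavelengths, nm):
    """Return the band of `cube` (H, W, B) whose center wavelength is closest to `nm`."""
    idx = int(np.argmin(np.abs(np.asarray(wavelengths) - nm)))
    return cube[..., idx].astype(np.float64)

def blood_age_indices(cube, wavelengths, eps=1e-8):
    """Compute per-pixel aged-blood indices from a reconstructed spectral cube."""
    r415 = band(cube, wavelengths, 415)
    r540 = band(cube, wavelengths, 540)
    r580 = band(cube, wavelengths, 580)
    r630 = band(cube, wavelengths, 630)
    # NDBI: normalized difference between the 580 nm and 630 nm responses
    ndbi = (r580 - r630) / (r580 + r630 + eps)
    # Soret ratio: strength of the Soret region relative to 630 nm
    soret = r415 / (r630 + eps)
    # Met index: methemoglobin peak (630 nm) relative to the 540 nm feature
    met = r630 / (r540 + eps)
    return ndbi, soret, met
```

A cube with a strong 630 nm response relative to 540 nm (high Met index) flags the aged, oxidized end of the degradation chain.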
Bridging Algorithms and Optical Hardware
While my MST and MAMBA models are remarkably effective at hyperspectral reconstruction (turning 3 RGB channels into 31 spectral bands), there is a computational catch. These models are typically trained on natural scenes and have never seen aged hemoglobin spectra. Left alone, the algorithm can hallucinate a spectral shape that looks plausible but is physically wrong.
To solve this and capture serious forensic data, the neural network must be ground-truthed against actual narrowband measurements. To build this pipeline, I used reference data captured by a NoIR dual-camera setup equipped with specific bandpass filters, in particular a 630 nm filter, to physically isolate the methemoglobin peak and verify the physics.
The Final False Color Visualization
By anchoring the reconstructed datacube with real narrowband filter data, the pipeline can perform a reliable spectral classification. The false-color gradient (the glowing "Inferno" scale seen here) is the visual map of those indices at work. The bright yellow "stars" represent dense, fossilized clusters of hemichrome and hematin, while the sweeping purple-to-orange background maps the physical drying gradient of the original serum.
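As an illustration of this visualization step (a simplified stand-in for the real pipeline, assuming matplotlib is available), an index map can be percentile-normalized and rendered with the inferno colormap so dense clusters appear as bright "stars":

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

def false_color(index_map, out_path="falsecolor.png"):
    """Render a per-pixel index map as an inferno false-color image.

    Normalizing to the 2nd-98th percentile keeps a few extreme pixels
    from washing out the rest of the gradient.
    """
    lo, hi = np.percentile(index_map, [2, 98])
    norm = np.clip((index_map - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    plt.imsave(out_path, norm, cmap="inferno")
    return norm

demo = false_color(np.random.default_rng(0).random((64, 64)))
```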
Monday, February 16, 2026
spectral witness: EPR pairs and the physics of light
In 1935, Albert Einstein and his colleagues Boris Podolsky and Nathan Rosen published a seminal paper that challenged the completeness of quantum mechanics.1 They introduced the concept of EPR pairs to describe quantum entanglement, where particles remain inextricably linked, their states correlated regardless of spatial separation.
It is the quintessential example of quantum entanglement. An EPR pair is created when two particles are born from a single, indivisible quantum event, like the decay of a parent particle.
This process "bakes in" a shared quantum reality where only the joint state of the pair is defined, governed by conservation laws such as spin summing to zero. As a result, the individual state of each particle is indeterminate, yet their fates are perfectly correlated.
Measuring one particle (e.g., finding its spin "up") instantaneously determines the state of its partner (spin "down"), regardless of the distance separating them. This "spooky action at a distance," as Einstein called it, revealed that particles could share hidden correlations across space that are invisible to any local measurement of one particle alone. While Einstein used this idea to argue quantum theory was incomplete, later work by John Bell2 and experiments by Alain Aspect3 confirmed this entanglement as a fundamental, non-classical feature of nature.
The EPR–Spectral Analogy: Hidden Correlations
Quantum Physics (1935)
EPR Pairs: Particles share non-local entanglement. Their quantum states are correlated across space. Measuring one particle gives random results; correlation only appears when comparing both.
Spectral Imaging (Today)
Spectral Pairs: Materials share spectral signatures. Their reflective properties are correlated across wavelength. The correlation is invisible to trichromatic (RGB) vision.
↓
Mathematical Reconstruction
↓
Reveals Hidden Correlations
Key Insight: Both quantum entanglement and material spectroscopy require looking beyond direct observation through mathematical analysis to reveal a deeper, hidden layer of correlation.
While the EPR debate centered on the foundations of quantum mechanics, its core philosophy, that direct observation can miss profound hidden relationships, resonates deeply with modern imaging. Just as the naked eye perceives only a fraction of the electromagnetic spectrum, standard RGB sensors discard the high-dimensional "fingerprint" that defines the chemical and physical properties of a subject. Today, we resolve this limitation through multispectral imaging. By capturing the full spectral power distribution of light, we can mathematically reconstruct the invisible data that exists between the visible bands, revealing hidden correlations across wavelength, just as the analysis of EPR pairs revealed hidden correlations across space.
Silicon Photonic Architecture: The 48MP Foundation
The realization of this physics in modern hardware is constrained by the physical dimensions of the semiconductor used to capture it. The interaction of incident photons with the silicon lattice, generating electron–hole pairs, is the primary data acquisition step for any spectral analysis.
Sensor Architecture: Sony IMX803
The core of this pipeline is the Sony IMX803 sensor. Contrary to persistent rumors of a 1‑inch sensor, this is a 1/1.28‑inch type architecture, optimized for high-resolution radiometry.
Active Sensing Area: Approximately \(9.8 \text{ mm} \times 7.3 \text{ mm}\). This physical limitation is paramount, as the sensor area is directly proportional to the total photon flux the device can integrate, setting the fundamental Signal‑to‑Noise Ratio (SNR) limit.
Pixel Pitch: The native photodiode size is \(1.22 \, \mu\text{m}\). In standard operation, the sensor utilizes a Quad‑Bayer color filter array to perform pixel binning, resulting in an effective pixel pitch of \(2.44 \, \mu\text{m}\).
Mode Selection
The choice between binned and unbinned modes depends on the analysis requirements:
Binned mode (12MP, 2.44 µm effective pitch): Superior for low‑light conditions and spectral estimation accuracy. By summing the charge from four photodiodes, the signal increases by a factor of 4, while read noise increases only by a factor of 2, significantly boosting the SNR required for accurate spectral estimation.
Unbinned mode (48MP, 1.22 µm native pitch): Optimal for high‑detail texture correlation where spatial resolution drives the analysis, such as resolving fine fiber patterns in historical documents or detecting micro‑scale material boundaries.
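The binning arithmetic above can be checked with a toy model (assuming a read-noise-limited regime with uncorrelated noise, which adds in quadrature):

```python
import math

def binned_snr_gain(n_pixels=4):
    """SNR improvement from summing n photodiodes under read-noise-limited,
    uncorrelated-noise assumptions: signal scales by n, noise by sqrt(n)."""
    signal_gain = n_pixels
    noise_gain = math.sqrt(n_pixels)  # uncorrelated noise adds in quadrature
    return signal_gain / noise_gain

print(binned_snr_gain())  # 2x SNR for 2x2 (Quad-Bayer) binning
```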
The Optical Path
The light reaching the sensor passes through a 7‑element lens assembly with an aperture of ƒ/1.78. It is critical to note that "Spectral Fingerprinting" measures the product of the material's reflectance \(R(\lambda)\) and the lens's transmittance \(T(\lambda)\). Modern high‑refractive‑index glass absorbs specific wavelengths in the near‑UV (less than 400 nm), which must be accounted for during calibration.
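A sketch of the corresponding calibration step: dividing the measured product \(R(\lambda) T(\lambda)\) by a calibrated transmittance curve. The curve values and the floor threshold below are illustrative assumptions; the floor prevents noise amplification in near-UV bands where the glass barely transmits:

```python
import numpy as np

def correct_transmittance(measured, lens_T, t_floor=0.05):
    """Recover reflectance R from measured R*T by dividing out lens
    transmittance T, with a floor on T to avoid blow-up in near-UV bands."""
    lens_T = np.maximum(np.asarray(lens_T, dtype=np.float64), t_floor)
    return np.asarray(measured, dtype=np.float64) / lens_T
```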
The Digital Container: DNG 1.7 and Linearity
The accuracy of computational physics depends entirely on the integrity of the input data. The Adobe DNG 1.7 specification provides the necessary framework for scientific mobile photography by strictly preserving signal linearity.
Scene‑Referred Linearity
Apple ProRAW utilizes the Linear DNG pathway. Unlike standard RAW files, which store unprocessed mosaic data, ProRAW stores pixel values after demosaicing but before non‑linear tone mapping. The data remains scene‑referred linear, meaning the digital number stored is linearly proportional to the number of photons collected (\(DN \propto N_{photons}\)). This linearity is a prerequisite for the mathematical rigor of Wiener estimation and spectral reconstruction.
The ProfileGainTableMap
A key innovation in DNG 1.7 is the ProfileGainTableMap (Tag 0xCD2D). This tag stores a spatially varying map of gain values that represents the local tone mapping intended for display.
Scientific Stewardship: By decoupling the "aesthetic" gain map from the "scientific" linear data, the pipeline can discard the gain map entirely. This ensures that the spectral reconstruction algorithms operate on pure, linear photon counts, free from the spatially variant distortions introduced by computational photography.
Algorithmic Inversion: From 3 Channels to 16 Bands
Recovering a high‑dimensional spectral curve \(S(\lambda)\) (e.g., 16 channels from 400 nm to 700 nm) from a low‑dimensional RGB input is an ill‑posed inverse problem. While traditional methods like Wiener Estimation provide a baseline, modern high‑end hardware enables the use of advanced Deep Learning architectures.
Wiener Estimation (The Linear Baseline)
The classical approach uses Wiener estimation to minimize the mean squared error between the estimated and actual spectra:
\(\hat{s} = W r\), where \(W = C_{s} M^{T} \left( M C_{s} M^{T} + C_{n} \right)^{-1}\)
Here \(r\) is the linear 3-channel camera response, \(M\) is the camera's spectral sensitivity matrix, \(C_{s}\) is the a priori covariance of representative spectra, and \(C_{n}\) is the sensor noise covariance. This method generates the initial 16-band approximation from the 3-channel input.
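A minimal NumPy sketch of the Wiener estimator. The camera sensitivities and training spectra below are random placeholders, and the shapes are my assumptions (3 channels, 16 bands):

```python
import numpy as np

def wiener_matrix(M, training, noise_var=1e-4):
    """Build the Wiener estimation matrix W = Cs M^T (M Cs M^T + Cn)^-1.

    M:        (3, B) camera spectral sensitivities
    training: (N, B) reference spectra used as the spectral prior
    """
    Cs = np.cov(training, rowvar=False)      # (B, B) prior spectral covariance
    Cn = noise_var * np.eye(M.shape[0])      # (3, 3) noise covariance
    return Cs @ M.T @ np.linalg.inv(M @ Cs @ M.T + Cn)

def reconstruct(rgb, W):
    """Estimate a B-band spectrum from a linear 3-channel response."""
    return W @ rgb

B = 16
rng = np.random.default_rng(1)
M = rng.random((3, B))             # placeholder sensitivities
training = rng.random((200, B))    # placeholder reference spectra
W = wiener_matrix(M, training)
s_hat = reconstruct(M @ training[0], W)  # round-trip a known spectrum
```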
State‑of‑the‑Art: Transformers and Mamba
For high‑end hardware environments, we can utilize predictive neural architectures that leverage spectral‑spatial correlations to resolve ambiguities.
MST++ (Spectral‑wise Transformer): The MST++ (Multi‑stage Spectral‑wise Transformer) architecture represents a significant leap in accuracy. Unlike global matrix methods, MST++ utilizes Spectral‑wise Multi‑head Self‑Attention (S‑MSA). It calculates attention maps across the spectral channel dimension, allowing the model to learn complex non‑linear correlations between texture and spectrum. Hardware Demand: The attention mechanism scales quadratically \(O(N^2)\), requiring significant GPU memory (VRAM) for high‑resolution images. This computational intensity necessitates powerful dedicated hardware to process the full data arrays.
MSS‑Mamba (Linear Complexity): The MSS‑Mamba (Multi‑Scale Spectral‑Spatial Mamba) model introduces Selective State Space Models (SSM) to the domain. It discretizes the continuous state space equation into a recurrent form that can be computed with linear complexity \(O(N)\). The Continuous Spectral‑Spatial Scan (CS3) strategy integrates spatial neighbors and spectral channels simultaneously, effectively "reading" the molecular composition in a continuous stream.
Computational Architecture: The Linux Python Stack
Achieving multispectral precision requires a robust, modular architecture capable of handling massive arrays spanning 16 spectral bands. The implementation relies on a heavy Linux-based Python stack designed to run on high-end hardware.
Ingestion and Processing: We can utilize rawpy (a LibRaw wrapper) for the low‑level ingestion of ProRAW DNG files, bypassing OS‑level gamma correction to access the linear 12‑bit data directly. NumPy engines handle the high‑performance matrix algebra required to expand 3‑channel RGB data into 16‑band spectral cubes.
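For illustration, the ingestion step might look like the sketch below (assuming rawpy is installed; the file path and parameter choices are mine, not a fixed recipe). Disabling gamma and auto-brightening keeps the decoded values scene-referred linear:

```python
import numpy as np

def to_linear_float(rgb16):
    """Scale 16-bit linear DN to [0, 1] floats without any tone curve."""
    return np.asarray(rgb16, dtype=np.float64) / 65535.0

def load_linear(path):
    """Decode a ProRAW DNG to scene-referred linear floats via LibRaw."""
    import rawpy  # LibRaw wrapper; assumed available
    with rawpy.imread(path) as raw:
        rgb = raw.postprocess(
            gamma=(1, 1),          # disable gamma encoding
            no_auto_bright=True,   # keep DN proportional to photon count
            output_bps=16,         # 16-bit output container
            use_camera_wb=True,
        )
    return to_linear_float(rgb)
```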
Scientific Analysis: Scikit‑image and SciPy are employed for geometric transforms, image restoration, and advanced spatial filtering. Matplotlib provides the visualization layer for generating spectral signature graphs and false‑color composites.
Data Footprint: The scale of this operation is significant. A single 48.8 MP image converted to floating‑point precision results in massive file sizes. Intermediate processing files often exceed 600 MB for a single 3‑band layer. When expanded to a full 16‑band multispectral cube, the storage and I/O requirements scale proportionally, necessitating the stability and memory management capabilities of a Linux environment.
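The quoted sizes follow from simple arithmetic (assuming float32 samples; real intermediate files carry additional overhead on top of the raw array):

```python
# Back-of-envelope storage math for a 48.8 MP frame at float32 precision.
PIXELS = 48.8e6
BYTES_PER_SAMPLE = 4  # float32

def cube_bytes(bands):
    """Raw array size in bytes for a given number of spectral bands."""
    return PIXELS * BYTES_PER_SAMPLE * bands

print(f"3-band layer: {cube_bytes(3) / 1e6:.0f} MB")   # ~586 MB before overhead
print(f"16-band cube: {cube_bytes(16) / 1e9:.2f} GB")  # ~3.12 GB before overhead
```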
The Spectral Solution
When analyzed through the 16‑band multispectral pipeline:
| Spectral Feature | Ultramarine (Lapis Lazuli) | Azurite (Copper Carbonate) |
|---|---|---|
| Primary Reflectance Peak | Approximately 450–480 nm (blue‑violet region) | Approximately 470–500 nm with secondary green peak at 550–580 nm |
| UV Response (below 420 nm) | Minimal reflectance, strong absorption | Moderate reflectance, characteristic of copper minerals |
| Red Absorption (600–700 nm) | Moderate to strong absorption | Strong absorption, typical of blue pigments |
| Characteristic Features | Sharp reflectance increase at 400–420 nm (violet edge) | Broader reflectance curve with copper signature absorption bands |
Note: Spectral values are approximate and can vary based on particle size, binding medium, and aging.
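For illustration, a spectral angle mapper (SAM) can separate the two pigments by curve shape rather than brightness. The reference signatures below are made-up placeholders that only mimic the qualitative shapes in the table, not measured pigment data:

```python
import numpy as np

def spectral_angle(a, b):
    """Angle between two spectra; insensitive to overall brightness scaling."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify(spectrum, references):
    """Return the reference name with the smallest spectral angle."""
    return min(references, key=lambda name: spectral_angle(spectrum, references[name]))

# Placeholder 4-band signatures (450, 500, 550, 630 nm), shape only:
refs = {
    "ultramarine": [0.9, 0.5, 0.2, 0.1],  # sharp blue-violet peak
    "azurite":     [0.7, 0.8, 0.5, 0.1],  # broader curve, green shoulder
}
print(classify([0.85, 0.55, 0.25, 0.12], refs))  # → ultramarine
```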
Completing the Picture
The successful analysis of complex material properties relies on a convergence of rigorous physics and advanced computation.
Photonic Foundation: The Sony IMX803 provides the necessary high‑SNR photonic capture, with mode selection (binned vs. unbinned) driven by the specific analytical requirements of each examination.
Data Integrity: DNG 1.7 is the critical enabler, preserving the linear relationship between photon flux and digital value while sequestering non‑linear aesthetic adjustments in metadata.
Algorithmic Precision: While Wiener estimation serves as a fast approximation, the highest fidelity is achieved through Transformer (MST++) and Mamba‑based architectures. These models disentangle the complex non‑linear relationships between visible light and material properties, effectively generating 16 distinct spectral bands from 3 initial channels.
Historical Continuity: The EPR paradox of 1935 revealed that quantum particles share hidden correlations across space, correlations invisible to local measurement but real nonetheless. Modern spectral imaging reveals an analogous truth: materials possess hidden correlations across wavelength, invisible to trichromatic vision but accessible through mathematical reconstruction. In both cases, completeness requires looking beyond what direct observation provides.
This synthesis of hardware specification, file format stewardship, and deep learning reconstruction defines the modern standard for non‑destructive material analysis — a spectral witness to what light alone cannot tell us.
And what about the paint? Here is a physical sample: pigment, substrate, history compressed into matter. Light passes through it, scatters from it, carries fragments of its story — yet the full truth remains hidden until we choose to look deeper. Every layer, every faded stroke, every chemical trace is a silent archive. We are not just observers; we are custodians of that archive. When we build tools to see beyond the visible, we are not merely extending sight — we are accepting a quiet responsibility: to bear witness honestly, to preserve what time would erase, to honor what has been made and endured.
Light can expose structure.
It cannot carry history.
That part is on us.
We can choose to let the machines we build serve memory rather than erasure, dignity rather than classification, truth rather than convenience. The past does not ask for perfection — it asks only that we refuse to let it be forgotten. In every reconstruction, in every layer we uncover, we have the chance to listen again to what was silenced. That is not just engineering. That is the work of being human.
References
1 Einstein, A., Podolsky, B., & Rosen, N. (1935). Can Quantum‑Mechanical Description of Physical Reality Be Considered Complete? Physical Review, 47(10), 777–780.
2 Bell, J. S. (1964). On the Einstein Podolsky Rosen paradox. Physics Physique Физика, 1(3), 195–200.
3 Aspect, A., Dalibard, J., & Roger, G. (1982). Experimental Test of Bell's Inequalities Using Time‑Varying Analyzers. Physical Review Letters, 49(25), 1804–1807.
4. Yuze Zhang1, Lingjie Li2, 4 Qiuzhen Lin11, Zhong Ming1, Fei Yu1, Victor C. M. Leung1. M3SR: Multi-Scale Multi-Perceptual Mamba for Efficient Spectral Reconstruction
5. Mengjie Qin1,2, Yuchao Feng1,2, Zongliang Wu1, Yulun Zhang3, Xin Yuan1*: Detail Matters: Mamba-Inspired Joint Unfolding Network for
Snapshot Spectral Compressive Imaging
6. Yuanhao Cai, Jing Lin, Zudi Lin, Haoqian Wang, Yulun Zhang, Hanspeter Pfister, Radu Timofte, and Luc Van Gool. MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction
7. Yapeng Li, Yong Luo, Lefei Zhang, Zengmao Wang, Bo Du. MambaHSI: Spatial-Spectral Mamba for Hyperspectral Image Classification
Bryan R Hinton
bryan (at) bryanhinton.com
Friday, January 16, 2026
The Unbroken Identity: Quantum-Secure Resistance
Memory means guaranteeing the immutability of truth over time. In the physical world, we use archives to preserve our stories. In the digital world, we use cryptography to protect identity, authorship, and trust.
A new threat from quantum computing now challenges this foundation. At scale, it will be capable of erasing or forging the cryptographic records that shape our digital lives.
To protect the integrity of collective memory and to prevent future attackers from stealing identities, I have left earlier cryptographic standards behind and now implement the highest available security level: post-quantum technology.
The Dual Threat: Shor and Grover
Quantum computing poses two distinct mathematical threats to modern cryptography. Understanding both is essential to understanding the transition to post-quantum standards.
Shor's Algorithm: The Public-Key Destroyer
Shor's algorithm represents the existential threat. It efficiently solves the integer factorization and discrete logarithm problems that underpin nearly all classical public-key cryptosystems, including RSA, Diffie-Hellman, and elliptic curves (ECC). This is not a weakening but a complete break: a sufficiently powerful quantum computer can derive private keys from public keys, undermining fundamental identity systems.
Grover's Algorithm: The Symmetric Compressor
Grover's algorithm targets symmetric cryptography and hash functions. It provides a quadratic speedup for brute-force searches, effectively halving a key's security strength. This is why AES-256 is so critical: even after Grover's reduction, it still provides 128 bits of effective security, which remains practically unbreakable.
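A toy illustration of that halving. A brute-force search over \(2^n\) keys takes on the order of \(2^{n/2}\) quantum queries, so the effective bit strength is cut in half:

```python
def effective_bits_after_grover(key_bits):
    """Effective security of an n-bit symmetric key against Grover search."""
    return key_bits // 2

for k in (128, 192, 256):
    print(k, "->", effective_bits_after_grover(k))  # 256 -> 128 remains strong
```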
The Practical Consequence: Store Now, Decrypt Later
The most immediate danger is the SNDL attack (Store Now, Decrypt Later). Encrypted traffic, identity credentials, certificates, and signatures can be intercepted today, while classical cryptography is still valid, and stored indefinitely. Once quantum technology matures, those archives can be retroactively decrypted or forged. If our cryptographic foundations fail, we also lose the ability to document our own digital history.
Beyond Obsolete Standards: Why ML-DSA-87
For years, elliptic-curve cryptography, particularly P-384 (ECDSA), was the gold standard in high-security environments. While P-384 offers roughly 192 bits of classical security, it has no resistance whatsoever to Shor's algorithm. It was designed for a classical world, and that world is coming to an end.
I have therefore implemented ML-DSA-87 for root-CA and signing operations. ML-DSA-87 is the highest security level of the modern lattice-based standards (Category 5), computationally equivalent to AES-256. Choosing this level over the more common ML-DSA-65 ensures that my network's identity is built with the largest security margin available today.
Hardware Reality: AArch64 and the PQC Load
Post-quantum cryptography is no longer theoretical. It is deployable now, even on routers and mobile devices. I run a customized OpenSSL 3.5.0 build on an AArch64 MediaTek Filogic 830/880 platform. This SoC is unusually well suited to post-quantum workloads.
Vector Scaling with NEON
ML-KEM and ML-DSA rely heavily on polynomial arithmetic. ARM NEON vector instructions allow these operations to execute in parallel, substantially reducing TLS handshake latency even with large post-quantum key material.
Memory Efficiency
Post-quantum keys are large. An ML-KEM-1024 public key is 1568 bytes, compared with 49 bytes for P-384. The 64-bit address space of AArch64 allows these buffers to be managed efficiently, avoiding the fragmentation issues of older architectures.
Technical Verification: Post-Quantum CLI Checks
After installing the customized toolchain on the AArch64 target system, the post-quantum stack can be verified directly.
KEM Verification
openssl list -kem-algorithms
Expected output:
ml-kem-1024
secp384r1mlkem1024 (high-security hybrid)
Signature Verification
openssl list -signature-algorithms | grep -i ml
Expected output:
ml-dsa-87 (256-bit security)
The presence of these algorithms confirms that the platform supports both post-quantum key exchange (ML-KEM-1024) and quantum-resistant signatures (ML-DSA-87).
Summary: My AArch64 Post-Quantum Stack
- Library: OpenSSL 3.5.4 (customized AArch64 build)
- SoC: MediaTek Filogic 830 / 880
- Architecture: ARMv8-A (AArch64)
- Key Exchange: ML-KEM-1024 + hybrids
- Identity & Signatures: ML-DSA-87
- Security Level: Level 5 (quantum-ready)
- Status: Production-ready
By moving directly to ML-KEM-1024 and ML-DSA-87, I have bypassed the legacy bottlenecks of the past decade. My network is no longer preparing for the quantum transition - it has already completed it. The rest of the industry will follow.
Tuesday, November 25, 2025
RK3588 ARM Bring-Up: U-Boot, Kernel, and Signal Integrity
The RK3588 SoC features a quad-core Arm Cortex-A76 plus quad-core Cortex-A55 CPU, a Mali-G610 GPU, and a highly flexible I/O architecture, making it ideal for embedded Linux SBCs such as the Radxa Rock 5B+.
I have been exploring and documenting the bring-up of this platform, including my contributions to U-Boot and the Linux kernel, device-tree development, and tooling for reproducible builds and signal-integrity validation. Most of this work is still in active development and in early preparation for upstream submission.
I am publishing my notes, measurements, and bring-up artifacts here as the work progresses, while active U-Boot and kernel development, including patch iteration, test builds, and branch history, is maintained in separate working repositories:
Signal analysis / bring-up repository: https://github.com/brhinton/signal-analysis
The repository currently includes (with more being added continually):
- Device-tree sources and Rock 5B+ board enablement
- UART signal-integrity measurements at 1.5 Mbit/s at the SoC pad
- Guides for building the kernel and bootloader and setting up debugging
- Early patch workflows and upstream preparation notes
Additional U-Boot and Linux kernel work, including mainline test builds, feature development, rebase updates, and ongoing patch series, is managed in separate working repositories. This repository serves as the central location for board-level measurements, documentation, and bring-up notes.
This is an ongoing, work-in-progress technical project, and I will update the repositories as additional measurements, boards, and upstream-ready changes are prepared.
Sunday, August 4, 2024
arch linux uefi with dm-crypt and uki
Arch Linux is known for its high level of customization, and configuring LUKS2 and LVM is a straightforward process. This guide provides a set of instructions for setting up an Arch Linux system with the following features:
- Root file system encryption using LUKS2.
- Logical Volume Management (LVM) for flexible storage management (optional; the steps below format the LUKS mapping directly, but an LVM layer can be added inside the container).
- Unified Kernel Image (UKI) bootable via UEFI.
- Optional: Detached LUKS header on external media for enhanced security.
Prerequisites
- A bootable Arch Linux ISO.
- An NVMe drive (e.g., /dev/nvme0n1).
- (Optional) A microSD card or other external medium for the detached LUKS header.
Important Considerations
- Data Loss: The following procedure will erase all data on the target drive. Back up any important data before proceeding.
- Secure Boot: This guide assumes you may want to use hardware secure boot.
- Detached LUKS Header: Using a detached LUKS header on external media adds a significant layer of security. If you lose the external media, you will lose access to your encrypted data.
- Swap: This guide uses a swap file. You may also use a swap partition if desired.
Step-by-Step Instructions
- Boot into the Arch Linux ISO:

  Boot your system from the Arch Linux installation media.

- Set the System Clock:

  # timedatectl set-ntp true

- Prepare the Disk:

  Identify your NVMe drive (e.g., /dev/nvme0n1); use lsblk to confirm. Then wipe the drive:

  # wipefs --all /dev/nvme0n1

  Create an EFI System Partition (ESP):

  # sgdisk /dev/nvme0n1 -n 1::+512MiB -t 1:EF00

  Create a partition for the encrypted volume:

  # sgdisk /dev/nvme0n1 -n 2 -t 2:8300
- Set up LUKS2 Encryption:

  Encrypt the second partition using LUKS2. This example uses the aes-xts-plain64 data cipher and the serpent-xts-plain keyslot cipher, with SHA512 as the hash. Adjust as needed.

  # cryptsetup luksFormat --cipher aes-xts-plain64 \
      --keyslot-cipher serpent-xts-plain --keyslot-key-size 512 \
      --use-random -S 0 -h sha512 -i 4000 /dev/nvme0n1p2

  - --cipher: Cipher used for data encryption.
  - --keyslot-cipher: Cipher used to encrypt the volume key in the keyslot.
  - --keyslot-key-size: Key size, in bits, for the keyslot cipher.
  - -S 0: Places the passphrase in key slot 0.
  - -h: Hash function used for key derivation.
  - -i: PBKDF iteration time, in milliseconds.

  Open the encrypted partition:

  # cryptsetup open /dev/nvme0n1p2 root
- Create the File Systems and Mount:

  Create an ext4 file system on the decrypted volume:

  # mkfs.ext4 /dev/mapper/root

  Mount the root file system:

  # mount /dev/mapper/root /mnt

  Create and mount the EFI System Partition:

  # mkfs.fat -F32 /dev/nvme0n1p1
  # mount --mkdir /dev/nvme0n1p1 /mnt/efi

  Create and enable a swap file:

  # dd if=/dev/zero of=/mnt/swapfile bs=1M count=8000 status=progress
  # chmod 600 /mnt/swapfile
  # mkswap /mnt/swapfile
  # swapon /mnt/swapfile
- Install the Base System:

  Use pacstrap to install the necessary packages:

  # pacstrap -K /mnt base base-devel linux linux-hardened \
      linux-hardened-headers linux-firmware apparmor mesa \
      xf86-video-intel vulkan-intel git vi vim ukify
- Generate the fstab File:

  # genfstab -U /mnt >> /mnt/etc/fstab
- Chroot into the New System:

  # arch-chroot /mnt
- Configure the System:

  Set the timezone:

  # ln -sf /usr/share/zoneinfo/UTC /etc/localtime
  # hwclock --systohc

  Uncomment en_US.UTF-8 UTF-8 in /etc/locale.gen and generate the locale:

  # sed -i 's/#en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
  # locale-gen
  # echo 'LANG=en_US.UTF-8' > /etc/locale.conf
  # echo "KEYMAP=us" > /etc/vconsole.conf

  Set the hostname:

  # echo myhostname > /etc/hostname
  # cat <<EOT >> /etc/hosts
  127.0.0.1 myhostname
  ::1 localhost
  127.0.1.1 myhostname.localdomain myhostname
  EOT

  Configure mkinitcpio.conf to include the encrypt hook:

  # sed -i 's/^HOOKS.*/HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block encrypt filesystems resume fsck)/' /etc/mkinitcpio.conf

  Create the initial ramdisk:

  # mkinitcpio -P

  Install the bootloader:

  # bootctl install

  Set the root password:

  # passwd

  Install microcode and efibootmgr:

  # pacman -S intel-ucode efibootmgr

  Get the swap offset:

  # swapoffset=$(filefrag -v /swapfile | awk '/\s+0:/ {print $4}' | sed -e 's/\.\.$//')

  Get the UUID of the encrypted partition:

  # blkid -s UUID -o value /dev/nvme0n1p2

  Create the EFI boot entry. Replace <UUID OF CRYPTDEVICE> with the actual UUID:

  # efibootmgr --disk /dev/nvme0n1 --part 1 --create --label "Linux" \
      --loader /vmlinuz-linux --unicode "cryptdevice=UUID=<UUID OF CRYPTDEVICE>:root \
  root=/dev/mapper/root resume=/dev/mapper/root resume_offset=$swapoffset \
  rw initrd=\intel-ucode.img initrd=\initramfs-linux.img" --verbose

  Configure the UKI presets:

  # cat <<EOT >> /etc/mkinitcpio.d/linux.preset
  ALL_kver="/boot/vmlinuz-linux"
  ALL_microcode=(/boot/*-ucode.img)
  PRESETS=('default' 'fallback')
  default_uki="/efi/EFI/Linux/arch-linux.efi"
  default_options="--splash /usr/share/systemd/bootctl/splash-arch.bmp"
  fallback_uki="/efi/EFI/Linux/arch-linux-fallback.efi"
  fallback_options="-S autodetect"
  EOT

  Create the UKI directory:

  # mkdir -p /efi/EFI/Linux

  Configure the kernel command line, using the swap offset computed above:

  # cat <<EOT >> /etc/kernel/cmdline
  cryptdevice=UUID=<UUID OF CRYPTDEVICE>:root root=/dev/mapper/root \
  resume=/dev/mapper/root resume_offset=$swapoffset rw
  EOT

  Build the UKIs:

  # mkinitcpio -p linux

  Configure the kernel install layout:

  # echo "layout=uki" >> /etc/kernel/install.conf
- Configure Networking (Optional):

  Create a systemd-networkd network configuration file:

  # cat <<EOT >> /etc/systemd/network/nic0.network
  [Match]
  Name=nic0

  [Network]
  DHCP=yes
  EOT
- Install a Desktop Environment (Optional):

  Install Xorg, Xfce, LightDM, and related packages:

  # pacman -Syu
  # pacman -S xorg xfce4 xfce4-goodies lightdm lightdm-gtk-greeter \
      libva-intel-driver mesa xorg-server xorg-xinit sudo
  # systemctl enable lightdm
  # systemctl start lightdm
- Enable Network Services (Optional):

  # systemctl enable systemd-resolved.service
  # systemctl enable systemd-networkd.service
  # systemctl start systemd-resolved.service
  # systemctl start systemd-networkd.service
- Create a User Account:

  Create a user account and add it to the wheel group:

  # useradd -m -G wheel -s /bin/bash myusername
- Reboot:

  Exit the chroot environment and reboot your system:

  # exit
  # umount -R /mnt
  # reboot