Bryan R Hinton
embedded systems, linux, memory, and meaning
Wednesday, February 25, 2026
Tuesday, February 24, 2026
1700 Years of Jewish life in German-speaking lands
https://www.lbi.org/projects/shared-history/ (the thing that means the most in the world to me)
Achterhuis
Monday, February 16, 2026
spectral witness: epr pairs and the physics of light
In 1935, Albert Einstein and his colleagues Boris Podolsky and Nathan Rosen published a seminal paper that challenged the completeness of quantum mechanics.1 They introduced the concept of EPR pairs to describe quantum entanglement, where particles remain inextricably linked, their states correlated regardless of spatial separation.
An EPR pair is created when two particles are born from a single, indivisible quantum event, such as the decay of a parent particle. It is the quintessential example of quantum entanglement.
This process "bakes in" a shared quantum reality where only the joint state of the pair is defined, governed by conservation laws such as spin summing to zero. As a result, the individual state of each particle is indeterminate, yet their fates are perfectly correlated.
Measuring one particle (e.g., finding its spin "up") instantaneously determines the state of its partner (spin "down"), regardless of the distance separating them. This "spooky action at a distance," as Einstein called it, revealed that particles could share hidden correlations across space that are invisible to any local measurement of one particle alone. While Einstein used this idea to argue quantum theory was incomplete, later work by John Bell2 and experiments by Alain Aspect3 confirmed this entanglement as a fundamental, non-classical feature of nature.
The EPR–Spectral Analogy: Hidden Correlations
Quantum Physics (1935)
EPR Pairs: Particles share non-local entanglement. Their quantum states are correlated across space. Measuring one particle gives random results; correlation only appears when comparing both.
Spectral Imaging (Today)
Spectral Pairs: Materials share spectral signatures. Their reflective properties are correlated across wavelength. The correlation is invisible to trichromatic (RGB) vision.
↓
Mathematical Reconstruction
↓
Reveals Hidden Correlations
Key Insight: Both quantum entanglement and material spectroscopy require looking beyond direct observation through mathematical analysis to reveal a deeper, hidden layer of correlation.
While the EPR debate centered on the foundations of quantum mechanics, its core philosophy, that direct observation can miss profound hidden relationships, resonates deeply with modern imaging. Just as the naked eye perceives only a fraction of the electromagnetic spectrum, standard RGB sensors discard the high-dimensional "fingerprint" that defines the chemical and physical properties of a subject. Today, we resolve this limitation through multispectral imaging. By capturing the full spectral power distribution of light, we can mathematically reconstruct the invisible data that exists between the visible bands, revealing hidden correlations across wavelength, just as the analysis of EPR pairs revealed hidden correlations across space.
Silicon Photonic Architecture: The 48MP Foundation
The realization of this physics in modern hardware is constrained by the physical dimensions of the semiconductor used to capture it. The interaction of incident photons with the silicon lattice, generating electron–hole pairs, is the primary data acquisition step for any spectral analysis.
Sensor Architecture: Sony IMX803
The core of this pipeline is the Sony IMX803 sensor. Contrary to persistent rumors of a 1‑inch sensor, this is a 1/1.28‑inch type architecture, optimized for high-resolution radiometry.
Active Sensing Area: Approximately \(9.8 \text{ mm} \times 7.3 \text{ mm}\). This physical limitation is paramount, as the sensor area is directly proportional to the total photon flux the device can integrate, setting the fundamental Signal‑to‑Noise Ratio (SNR) limit.
Pixel Pitch: The native photodiode size is \(1.22 \, \mu\text{m}\). In standard operation, the sensor utilizes a Quad‑Bayer color filter array to perform pixel binning, resulting in an effective pixel pitch of \(2.44 \, \mu\text{m}\).
Mode Selection
The choice between binned and unbinned modes depends on the analysis requirements:
Binned mode (12MP, 2.44 µm effective pitch): Superior for low‑light conditions and spectral estimation accuracy. By summing the charge from four photodiodes, the signal increases by a factor of 4, while read noise increases only by a factor of 2, significantly boosting the SNR required for accurate spectral estimation.
Unbinned mode (48MP, 1.22 µm native pitch): Optimal for high‑detail texture correlation where spatial resolution drives the analysis, such as resolving fine fiber patterns in historical documents or detecting micro‑scale material boundaries.
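The binning trade-off above can be checked with a few lines of arithmetic. A minimal sketch, assuming a shot-noise-limited exposure and uncorrelated Gaussian read noise per photodiode (the electron counts are illustrative, not measured IMX803 figures):

```python
import math

# Per-photodiode values (illustrative assumptions, not measured IMX803 data)
signal_e = 400.0      # mean signal in electrons per 1.22 um photodiode
read_noise_e = 1.5    # read noise in electrons RMS per photodiode

def snr(signal, read_noise):
    # Shot noise is sqrt(signal); read noise adds in quadrature.
    return signal / math.sqrt(signal + read_noise**2)

# Unbinned: one photodiode.
snr_unbinned = snr(signal_e, read_noise_e)

# Binned: charge from 4 photodiodes is summed -> signal x4,
# uncorrelated read noise grows only by sqrt(4) = 2.
snr_binned = snr(4 * signal_e, 2 * read_noise_e)

print(f"unbinned SNR: {snr_unbinned:.1f}")
print(f"binned SNR:   {snr_binned:.1f}  ({snr_binned / snr_unbinned:.2f}x)")
```

In the shot-noise-limited regime the binned SNR gain converges to a factor of 2, the square root of the 4x signal gain.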
The Optical Path
The light reaching the sensor passes through a 7‑element lens assembly with an aperture of ƒ/1.78. It is critical to note that "Spectral Fingerprinting" measures the product of the material's reflectance \(R(\lambda)\) and the lens's transmittance \(T(\lambda)\). Modern high‑refractive‑index glass absorbs specific wavelengths in the near‑UV (less than 400 nm), which must be accounted for during calibration.
The Digital Container: DNG 1.7 and Linearity
The accuracy of computational physics depends entirely on the integrity of the input data. The Adobe DNG 1.7 specification provides the necessary framework for scientific mobile photography by strictly preserving signal linearity.
Scene‑Referred Linearity
Apple ProRAW utilizes the Linear DNG pathway. Unlike standard RAW files, which store unprocessed mosaic data, ProRAW stores pixel values after demosaicing but before non‑linear tone mapping. The data remains scene‑referred linear, meaning the digital number stored is linearly proportional to the number of photons collected (\(DN \propto N_{photons}\)). This linearity is a prerequisite for the mathematical rigor of Wiener estimation and spectral reconstruction.
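That proportionality is directly testable: photograph a static, uniformly lit patch across a bracketed exposure series and fit the mean digital number against exposure time. The sketch below uses synthetic data standing in for the measured patch means (the slope, noise level, and exposure values are illustrative assumptions):

```python
import numpy as np

# Synthetic stand-in for measured data: mean DN of a gray patch at several
# exposure times. A scene-referred linear pipeline gives DN proportional
# to exposure (after black-level subtraction).
exposure_s = np.array([1/500, 1/250, 1/125, 1/60, 1/30])
mean_dn = 52000.0 * exposure_s + np.random.default_rng(0).normal(0, 5, 5)

# Least-squares fit DN = a * t + b; linearity shows up as a near-zero
# intercept and a coefficient of determination close to 1.
a, b = np.polyfit(exposure_s, mean_dn, 1)
pred = a * exposure_s + b
r2 = 1 - np.sum((mean_dn - pred) ** 2) / np.sum((mean_dn - np.mean(mean_dn)) ** 2)

print(f"slope={a:.0f} DN/s, intercept={b:.1f} DN, R^2={r2:.5f}")
```

A tone-mapped (display-referred) pipeline fails this test immediately: the fit residuals grow systematically toward the bright end of the series.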
The ProfileGainTableMap
A key innovation in DNG 1.7 is the ProfileGainTableMap (Tag 0xCD2D). This tag stores a spatially varying map of gain values that represents the local tone mapping intended for display.
Scientific Stewardship: By decoupling the "aesthetic" gain map from the "scientific" linear data, the pipeline can discard the gain map entirely. This ensures that the spectral reconstruction algorithms operate on pure, linear photon counts, free from the spatially variant distortions introduced by computational photography.
Algorithmic Inversion: From 3 Channels to 16 Bands
Recovering a high‑dimensional spectral curve \(S(\lambda)\) (e.g., 16 channels from 400 nm to 700 nm) from a low‑dimensional RGB input is an ill‑posed inverse problem. While traditional methods like Wiener Estimation provide a baseline, modern high‑end hardware enables the use of advanced Deep Learning architectures.
Wiener Estimation (The Linear Baseline)
The classical approach utilizes Wiener Estimation to minimize the mean square error between the estimated and actual spectra:

\[ \hat{S} = K_s A^{T} \left( A K_s A^{T} + K_n \right)^{-1} c \]

where \(c\) is the 3-channel camera response, \(A\) is the spectral sensitivity matrix of the camera system, \(K_s\) is the a priori autocorrelation matrix of the spectra, and \(K_n\) is the noise covariance.
This method generates the initial 16‑band approximation from the 3‑channel input.
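A minimal NumPy sketch of the estimator with synthetic curves (the sensitivity matrix, spectral prior, and noise covariance below are illustrative assumptions, not calibrated camera data):

```python
import numpy as np

n_bands, n_channels = 16, 3
wavelengths = np.linspace(400, 700, n_bands)

# Illustrative RGB sensitivity curves: three Gaussians on the wavelength axis.
centers = np.array([610.0, 540.0, 460.0])  # assumed R, G, B peak wavelengths
A = np.exp(-0.5 * ((wavelengths[None, :] - centers[:, None]) / 40.0) ** 2)

# Smoothness prior: reflectance is correlated between nearby bands.
K_s = np.exp(-np.abs(wavelengths[:, None] - wavelengths[None, :]) / 80.0)
K_n = 1e-4 * np.eye(n_channels)            # sensor noise covariance

# Wiener estimation matrix: W = K_s A^T (A K_s A^T + K_n)^-1
W = K_s @ A.T @ np.linalg.inv(A @ K_s @ A.T + K_n)

# Recover a smooth test spectrum from its 3-channel camera response.
s_true = 0.5 + 0.4 * np.sin(wavelengths / 60.0)
c = A @ s_true                             # simulated RGB response
s_hat = W @ c                              # 16-band estimate

print("RMS error:", np.sqrt(np.mean((s_hat - s_true) ** 2)))
```

Once computed from calibration data, the same 16x3 matrix W reconstructs every pixel with a single multiply, which is why Wiener estimation remains the fast baseline.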
State‑of‑the‑Art: Transformers and Mamba
For high‑end hardware environments, we can utilize predictive neural architectures that leverage spectral‑spatial correlations to resolve ambiguities.
MST++ (Spectral‑wise Transformer): The MST++ (Multi‑stage Spectral‑wise Transformer) architecture represents a significant leap in accuracy. Unlike global matrix methods, MST++ utilizes Spectral‑wise Multi‑head Self‑Attention (S‑MSA). It calculates attention maps across the spectral channel dimension, allowing the model to learn complex non‑linear correlations between texture and spectrum. Hardware Demand: The attention mechanism scales quadratically \(O(N^2)\), requiring significant GPU memory (VRAM) for high‑resolution images. This computational intensity necessitates powerful dedicated hardware to process the full data arrays.
MSS‑Mamba (Linear Complexity): The MSS‑Mamba (Multi‑Scale Spectral‑Spatial Mamba) model introduces Selective State Space Models (SSM) to the domain. It discretizes the continuous state space equation into a recurrent form that can be computed with linear complexity \(O(N)\). The Continuous Spectral‑Spatial Scan (CS3) strategy integrates spatial neighbors and spectral channels simultaneously, effectively "reading" the molecular composition in a continuous stream.
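The architectural contrast can be made concrete with a toy version of spectral-wise attention: tokens are whole spectral bands rather than pixels, so the attention map is C x C. This is a single-head sketch without learned projections, purely illustrative and not the actual MST++ implementation:

```python
import numpy as np

def spectral_attention(cube):
    """Single-head spectral-wise self-attention over an (H, W, C) cube.

    Each token is one spectral band (its flattened spatial map), so the
    attention map is C x C regardless of image size.
    """
    h, w, c = cube.shape
    x = cube.reshape(h * w, c).T           # (C, H*W): one token per band
    q, k, v = x, x, x                      # identity projections (sketch)
    scale = 1.0 / np.sqrt(x.shape[1])
    attn = q @ k.T * scale                 # (C, C) spectral attention map
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)  # row-wise softmax
    out = attn @ v                         # (C, H*W)
    return out.T.reshape(h, w, c), attn

rng = np.random.default_rng(0)
cube = rng.random((32, 32, 16))            # toy 16-band cube
out, attn = spectral_attention(cube)
print(out.shape, attn.shape)
```

A spatial-token variant of the same code, with H*W tokens instead of C, is where the quadratic memory cost discussed above appears.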
Computational Architecture: The Linux Python Stack
Achieving multispectral precision requires a robust, modular architecture capable of handling massive arrays across 16 dimensions. The implementation relies on a heavy Linux‑based Python stack designed to run on high‑end hardware.
Ingestion and Processing: We can utilize rawpy (a LibRaw wrapper) for the low‑level ingestion of ProRAW DNG files, bypassing OS‑level gamma correction to access the linear 12‑bit data directly. NumPy engines handle the high‑performance matrix algebra required to expand 3‑channel RGB data into 16‑band spectral cubes.
Scientific Analysis: Scikit‑image and SciPy are employed for geometric transforms, image restoration, and advanced spatial filtering. Matplotlib provides the visualization layer for generating spectral signature graphs and false‑color composites.
Data Footprint: The scale of this operation is significant. A single 48.8 MP image converted to floating‑point precision results in massive file sizes. Intermediate processing files often exceed 600 MB for a single 3‑band layer. When expanded to a full 16‑band multispectral cube, the storage and I/O requirements scale proportionally, necessitating the stability and memory management capabilities of a Linux environment.
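The footprint figures follow directly from the array dimensions. A quick back-of-the-envelope check (8064 x 6048 is the commonly cited active resolution for this sensor class; treat it as an assumption):

```python
# Back-of-the-envelope memory math for the spectral cubes.
width, height = 8064, 6048            # ~48.8 MP (assumed active resolution)
pixels = width * height
bytes_per_sample = 4                  # float32

mp = pixels / 1e6
layer_3band_mb = pixels * 3 * bytes_per_sample / 2**20
cube_16band_mb = pixels * 16 * bytes_per_sample / 2**20

print(f"{mp:.1f} MP")
print(f"3-band float32 layer: {layer_3band_mb:.0f} MiB")
print(f"16-band float32 cube: {cube_16band_mb:.0f} MiB")
```

The 16-band cube lands near 3 GiB per image before any intermediate buffers, which is what drives the Linux memory-management requirement above.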
The Spectral Solution
When analyzed through the 16‑band multispectral pipeline:
| Spectral Feature | Ultramarine (Lapis Lazuli) | Azurite (Copper Carbonate) |
|---|---|---|
| Primary Reflectance Peak | Approximately 450–480 nm (blue‑violet region) | Approximately 470–500 nm with secondary green peak at 550–580 nm |
| UV Response (below 420 nm) | Minimal reflectance, strong absorption | Moderate reflectance, characteristic of copper minerals |
| Red Absorption (600–700 nm) | Moderate to strong absorption | Strong absorption, typical of blue pigments |
| Characteristic Features | Sharp reflectance increase at 400–420 nm (violet edge) | Broader reflectance curve with copper signature absorption bands |
Note: Spectral values are approximate and can vary based on particle size, binding medium, and aging.
Completing the Picture
The successful analysis of complex material properties relies on a convergence of rigorous physics and advanced computation.
Photonic Foundation: The Sony IMX803 provides the necessary high‑SNR photonic capture, with mode selection (binned vs. unbinned) driven by the specific analytical requirements of each examination.
Data Integrity: DNG 1.7 is the critical enabler, preserving the linear relationship between photon flux and digital value while sequestering non‑linear aesthetic adjustments in metadata.
Algorithmic Precision: While Wiener estimation serves as a fast approximation, the highest fidelity is achieved through Transformer (MST++) and Mamba‑based architectures. These models disentangle the complex non‑linear relationships between visible light and material properties, effectively generating 16 distinct spectral bands from 3 initial channels.
Historical Continuity: The EPR paradox of 1935 revealed that quantum particles share hidden correlations across space, correlations invisible to local measurement but real nonetheless. Modern spectral imaging reveals an analogous truth: materials possess hidden correlations across wavelength, invisible to trichromatic vision but accessible through mathematical reconstruction. In both cases, completeness requires looking beyond what direct observation provides.
This synthesis of hardware specification, file format stewardship, and deep learning reconstruction defines the modern standard for non‑destructive material analysis — a spectral witness to what light alone cannot tell us.
And what about the paint? Here is a physical sample: pigment, substrate, history compressed into matter. Light passes through it, scatters from it, carries fragments of its story — yet the full truth remains hidden until we choose to look deeper. Every layer, every faded stroke, every chemical trace is a silent archive. We are not just observers; we are custodians of that archive. When we build tools to see beyond the visible, we are not merely extending sight — we are accepting a quiet responsibility: to bear witness honestly, to preserve what time would erase, to honor what has been made and endured.
Light can expose structure.
It cannot carry history.
That part is on us.
We can choose to let the machines we build serve memory rather than erasure, dignity rather than classification, truth rather than convenience. The past does not ask for perfection — it asks only that we refuse to let it be forgotten. In every reconstruction, in every layer we uncover, we have the chance to listen again to what was silenced. That is not just engineering. That is the work of being human.
References
1. Einstein, A., Podolsky, B., & Rosen, N. (1935). Can Quantum‑Mechanical Description of Physical Reality Be Considered Complete? Physical Review, 47(10), 777–780.
2. Bell, J. S. (1964). On the Einstein Podolsky Rosen paradox. Physics Physique Физика, 1(3), 195–200.
3. Aspect, A., Dalibard, J., & Roger, G. (1982). Experimental Test of Bell's Inequalities Using Time‑Varying Analyzers. Physical Review Letters, 49(25), 1804–1807.
4. Zhang, Y., Li, L., Lin, Q., Ming, Z., Yu, F., & Leung, V. C. M. M3SR: Multi-Scale Multi-Perceptual Mamba for Efficient Spectral Reconstruction.
5. Qin, M., Feng, Y., Wu, Z., Zhang, Y., & Yuan, X. Detail Matters: Mamba-Inspired Joint Unfolding Network for Snapshot Spectral Compressive Imaging.
6. Cai, Y., Lin, J., Lin, Z., Wang, H., Zhang, Y., Pfister, H., Timofte, R., & Van Gool, L. MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction.
7. Li, Y., Luo, Y., Zhang, L., Wang, Z., & Du, B. MambaHSI: Spatial-Spectral Mamba for Hyperspectral Image Classification.
Bryan R Hinton
bryan (at) bryanhinton.com
Friday, January 16, 2026
The Unbroken Identity: Quantum-Safe Resistance
Our digital lives rest on a cryptographic foundation, and quantum computers now threaten it. At scale, they will be able to erase or forge the cryptographic records that shape our digital lives.
To protect the integrity of collective memory and prevent future attackers from stealing identities, I have left previous cryptographic standards behind and implemented the highest security level available today: post-quantum cryptography.
The double threat: Shor and Grover
Quantum computing poses two distinct mathematical threats to modern cryptography. To understand the transition to post-quantum standards, it is essential to know both.
Shor's Algorithm: The Public-Key Breaker
Shor's algorithm represents the existential threat. It efficiently solves the integer factorization and discrete logarithm problems that underpin nearly all classical public-key cryptography, including RSA, Diffie-Hellman, and elliptic curve systems (ECC). This is not a degradation but a complete break. A sufficiently powerful quantum computer can derive a private key from a public key, thereby fundamentally undermining classical identity systems.
Grover's Algorithm: The Symmetric Squeezer
Grover's algorithm targets symmetric cryptography and hash functions. It provides a quadratic speedup for brute-force key search, effectively halving a cipher's effective key length. This is why AES-256 is so crucial: even after Grover's reduction, it still offers 128 bits of effective security, which remains far beyond any practical attack.
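The halving is pure exponent arithmetic, which a few lines make explicit:

```python
def effective_bits_after_grover(key_bits: int) -> int:
    """Grover searches N = 2^k keys in ~sqrt(N) = 2^(k/2) iterations,
    so the effective security level is the key length halved."""
    return key_bits // 2

for cipher, bits in [("AES-128", 128), ("AES-256", 256)]:
    eff = effective_bits_after_grover(bits)
    print(f"{cipher}: {bits}-bit key -> ~{eff}-bit quantum security "
          f"(~2^{eff} Grover iterations)")
```

AES-128's post-Grover margin of ~64 bits is considered too thin for long-lived secrets; AES-256's ~128 bits is not.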
The practical consequence: Store now, decrypt later
The most immediate danger is the SNDL attack (Store Now, Decrypt Later). Encrypted traffic, identity proofs, certificates, and signatures can be intercepted today, while classical cryptography is still valid, and stored indefinitely. Once quantum technology matures, these archives can be decrypted or forged retroactively. If our cryptographic foundations fail, we also lose the ability to document our own digital history.
Beyond outdated standards: Why ML-DSA-87
For years, elliptic curve cryptography, particularly P-384 (ECDSA), was the gold standard in high-security environments. While P-384 offers about 192 bits of classical security, it has no resistance whatsoever to Shor's algorithm. It was designed for a classical world, and that world is coming to an end.
This is why I have implemented ML-DSA-87 for Root CA and signing operations. ML-DSA-87 is the highest security level defined by modern lattice-based standards, offering Category 5 security, which is computationally equivalent to AES-256. Choosing this level instead of the more common ML-DSA-65 ensures that my network's identity is built with the greatest possible security margin available today.
Hardware reality: AArch64 and the PQC load
Post-quantum cryptography is no longer theoretical. It is deployable now, even on routers and mobile-class hardware. I am running a custom OpenSSL 3.5.4 build on an AArch64 MediaTek Filogic 830/880 platform. This SoC is unusually well-suited for post-quantum workloads.
Vector scaling with NEON
ML-KEM and ML-DSA rely heavily on polynomial arithmetic. ARM NEON vector instructions allow these operations to be executed in parallel, significantly reducing TLS handshake latency even with large PQ key material.
Memory efficiency
Post-quantum keys are large. A public ML-KEM-1024 key is 1568 bytes, compared to 49 bytes for a compressed P-384 point. The 64-bit address space of AArch64 allows for clean management of these buffers, avoiding the fragmentation and memory-pressure issues seen on older 32-bit architectures.
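The size gap is easy to quantify. The ML-KEM-1024 figure comes from the FIPS 203 parameter set and the P-384 figures are standard SEC1 point encodings; the script itself is just a comparison sketch:

```python
# Public-key sizes: ML-KEM-1024 (FIPS 203) vs P-384 (SEC1 point encodings).
sizes = {
    "ML-KEM-1024 encapsulation key": 1568,
    "P-384 compressed point":        49,
    "P-384 uncompressed point":      97,
}
base = sizes["P-384 compressed point"]
for name, n in sizes.items():
    print(f"{name:30s} {n:5d} bytes  ({n / base:.0f}x)")
```

A 32x growth in public-key material is what makes buffer management, and NEON-accelerated polynomial arithmetic, worth discussing at all on embedded targets.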
Technical verification: Post-quantum CLI checks
After installing the custom toolchain on the AArch64 target system, the post-quantum stack can be verified directly.
KEM verification
openssl list -kem-algorithms
Expected output:
ml-kem-1024
secp384r1mlkem1024 (high-security hybrid)
Signature verification
openssl list -signature-algorithms | grep -i ml
Expected output:
ml-dsa-87 (256-bit security)
The presence of these algorithms confirms that the platform supports both post-quantum key exchange (ML-KEM-1024) and quantum-resistant signatures (ML-DSA-87).
Summary: My AArch64 post-quantum stack
- Library: OpenSSL 3.5.4 (custom AArch64 build)
- SoC: MediaTek Filogic 830 / 880
- Architecture: ARMv8-A (AArch64)
- Key exchange: ML-KEM-1024 + hybrids
- Identity & signature: ML-DSA-87
- Security level: Level 5 (quantum-ready)
- Status: Production-ready
By moving directly to ML-KEM-1024 and ML-DSA-87, I have bypassed the outdated bottlenecks of the last decade. My network is no longer preparing for the quantum transition; it has already completed it. The rest of the industry will follow suit in time.
Tuesday, November 25, 2025
rk3588 bring-up: u-boot, kernel, and signal integrity
The RK3588 SoC features a quad-core Arm Cortex-A76/A55 CPU, a Mali-G610 GPU, and a highly flexible I/O architecture that makes it ideal for embedded Linux SBCs like the Radxa Rock 5B+.
I’ve been exploring and documenting board bring-up for this platform, including U-Boot and Linux kernel contributions, device-tree development, and tooling for reproducible builds and signal-integrity validation. Most of this work is still in active development and early upstream preparation.
I’m publishing my notes, measurements, and bring-up artifacts here as the work progresses, while active U-Boot and kernel development (patch iteration, test builds, and branch history) is maintained in separate working repositories:
Signal Analysis / Bring-Up Repo: https://github.com/brhinton/signal-analysis
The repository currently includes (with more being added):
- Device-tree sources and Rock 5B+ board enablement
- UART signal-integrity captures at 1.5 Mbps measured at the SoC pad
- Build instructions for kernel, bootloader, and debugging setup
- Early patch workflows and upstream preparation notes
Additional U-Boot and Linux kernel work, including mainline test builds, feature development, rebases, and patch series in progress, is maintained in separate working repositories. This repo serves as the central location for measurements, documentation, and board-level bring-up notes.
This is an ongoing, work-in-progress engineering effort, and I’ll be updating the repositories as additional measurements, boards, and upstream-ready changes are prepared.
Sunday, August 4, 2024
arch linux uefi with dm-crypt and uki
Arch Linux is known for its high level of customization, and configuring LUKS2 full-disk encryption is a straightforward process. This guide provides a set of instructions for setting up an Arch Linux system with the following features:
- Root file system encryption using LUKS2.
- Unified Kernel Image (UKI) bootable via UEFI.
- Optional: Detached LUKS header on external media for enhanced security.
Prerequisites
- A bootable Arch Linux ISO.
- An NVMe drive (e.g., /dev/nvme0n1).
- (Optional) A microSD card or other external medium for the detached LUKS header.
Important Considerations
- Data Loss: The following procedure will erase all data on the target drive. Back up any important data before proceeding.
- Secure Boot: This guide does not configure Secure Boot, but the UKI layout is compatible with signing the images for it later.
- Detached LUKS Header: Using a detached LUKS header on external media adds a significant layer of security. If you lose the external media, you will lose access to your encrypted data.
- Swap: This guide uses a swap file. You may also use a swap partition if desired.
Step-by-Step Instructions
- Boot into the Arch Linux ISO:
  Boot your system from the Arch Linux installation media.
- Set the System Clock:
  # timedatectl set-ntp true
- Prepare the Disk:
  - Identify your NVMe drive (e.g., /dev/nvme0n1). Use lsblk to confirm.
  - Wipe the drive:
    # wipefs --all /dev/nvme0n1
  - Create an EFI System Partition (ESP):
    # sgdisk /dev/nvme0n1 -n 1::+512MiB -t 1:EF00
  - Create a partition for the encrypted volume:
    # sgdisk /dev/nvme0n1 -n 2 -t 2:8300
- Set up LUKS2 Encryption:
  Encrypt the second partition using LUKS2. This example uses the aes-xts-plain64 and serpent-xts-plain ciphers, and SHA512 for the hash. Adjust as needed.
  # cryptsetup luksFormat --cipher aes-xts-plain64 \
      --keyslot-cipher serpent-xts-plain --keyslot-key-size 512 \
      --use-random -S 0 -h sha512 -i 4000 /dev/nvme0n1p2
  - --cipher: Specifies the cipher for data encryption.
  - --keyslot-cipher: Specifies the cipher used to encrypt the volume key in the keyslot.
  - --keyslot-key-size: Specifies the key size, in bits, for the keyslot cipher.
  - -S 0: Places the new passphrase in keyslot 0.
  - -h: Specifies the hash function used for key derivation.
  - -i: Specifies the PBKDF iteration time, in milliseconds.
  Open the encrypted partition:
  # cryptsetup open /dev/nvme0n1p2 root
- Create the File Systems and Mount:
  Create an ext4 file system on the decrypted volume:
  # mkfs.ext4 /dev/mapper/root
  Mount the root file system:
  # mount /dev/mapper/root /mnt
  Create and mount the EFI System Partition:
  # mkfs.fat -F32 /dev/nvme0n1p1
  # mount --mkdir /dev/nvme0n1p1 /mnt/efi
  Create and enable a swap file:
  # dd if=/dev/zero of=/mnt/swapfile bs=1M count=8000 status=progress
  # chmod 600 /mnt/swapfile
  # mkswap /mnt/swapfile
  # swapon /mnt/swapfile
- Install the Base System:
  Use pacstrap to install the necessary packages:
  # pacstrap -K /mnt base base-devel linux linux-hardened \
      linux-hardened-headers linux-firmware apparmor mesa \
      xf86-video-intel vulkan-intel git vi vim ukify
- Generate the fstab File:
  # genfstab -U /mnt >> /mnt/etc/fstab
- Chroot into the New System:
  # arch-chroot /mnt
- Configure the System:
  Set the timezone:
  # ln -sf /usr/share/zoneinfo/UTC /etc/localtime
  # hwclock --systohc
  Uncomment en_US.UTF-8 UTF-8 in /etc/locale.gen and generate the locale:
  # sed -i 's/^#en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
  # locale-gen
  # echo 'LANG=en_US.UTF-8' > /etc/locale.conf
  # echo "KEYMAP=us" > /etc/vconsole.conf
  Set the hostname:
  # echo myhostname > /etc/hostname
  # cat <<EOT >> /etc/hosts
  127.0.0.1 myhostname
  ::1 localhost
  127.0.1.1 myhostname.localdomain myhostname
  EOT
  Configure mkinitcpio.conf to include the encrypt and resume hooks:
  # sed -i 's/^HOOKS.*/HOOKS=(base udev autodetect modconf kms keyboard keymap consolefont block encrypt filesystems resume fsck)/' /etc/mkinitcpio.conf
  Create the initial ramdisk:
  # mkinitcpio -P
  Install the bootloader:
  # bootctl install
  Set the root password:
  # passwd
  Install microcode and efibootmgr:
  # pacman -S intel-ucode efibootmgr
  Get the swap offset (inside the chroot the file is /swapfile):
  # swapoffset=$(filefrag -v /swapfile | awk '/\s+0:/ {print $4}' | sed -e 's/\.\.$//')
  Get the UUID of the encrypted partition:
  # blkid -s UUID -o value /dev/nvme0n1p2
  Create the EFI boot entry. Replace <UUID OF CRYPTDEVICE> with the actual UUID:
  # efibootmgr --disk /dev/nvme0n1 --part 1 --create --label "Linux" \
      --loader /vmlinuz-linux --unicode "cryptdevice=UUID=<UUID OF CRYPTDEVICE>:root \
      root=/dev/mapper/root resume=/dev/mapper/root resume_offset=$swapoffset \
      rw initrd=\intel-ucode.img initrd=\initramfs-linux.img" --verbose
  Configure the UKI presets:
  # cat <<EOT >> /etc/mkinitcpio.d/linux.preset
  ALL_kver="/boot/vmlinuz-linux"
  ALL_microcode=(/boot/*-ucode.img)
  PRESETS=('default' 'fallback')
  default_uki="/efi/EFI/Linux/arch-linux.efi"
  default_options="--splash /usr/share/systemd/bootctl/splash-arch.bmp"
  fallback_uki="/efi/EFI/Linux/arch-linux-fallback.efi"
  fallback_options="-S autodetect"
  EOT
  Create the UKI directory:
  # mkdir -p /efi/EFI/Linux
  Configure the kernel command line. Replace <UUID OF CRYPTDEVICE> with the actual UUID:
  # cat <<EOT >> /etc/kernel/cmdline
  cryptdevice=UUID=<UUID OF CRYPTDEVICE>:root root=/dev/mapper/root resume=/dev/mapper/root resume_offset=$swapoffset rw
  EOT
  Build the UKIs:
  # mkinitcpio -p linux
  Configure the kernel install layout:
  # echo "layout=uki" >> /etc/kernel/install.conf
- Configure Networking (Optional):
  Create a systemd-networkd network configuration file:
  # cat <<EOT >> /etc/systemd/network/nic0.network
  [Match]
  Name=nic0

  [Network]
  DHCP=yes
  EOT
- Install a Desktop Environment (Optional):
  Install Xorg, Xfce, LightDM, and related packages:
  # pacman -Syu
  # pacman -S xorg xfce4 xfce4-goodies lightdm lightdm-gtk-greeter \
      libva-intel-driver mesa xorg-server xorg-xinit sudo
  # systemctl enable lightdm
  (Services cannot be started inside the chroot; LightDM will start on the next boot.)
- Enable Network Services (Optional):
  # systemctl enable systemd-resolved.service
  # systemctl enable systemd-networkd.service
  (Both services will start automatically on the next boot.)
- Create a User Account:
  Create a user account and add it to the wheel group:
  # useradd -m -G wheel -s /bin/bash myusername
- Reboot:
  Exit the chroot environment and reboot your system:
  # exit
  # umount -R /mnt
  # reboot