Prerequisites: This article assumes you’re comfortable with Rust basics and have a general understanding of what an operating system does. You don’t need prior Rust kernel development experience; I’ll explain the low-level concepts as we go.
I’ve always loved studying low-level systems, understanding exactly how and why things work the way they do. Over the last couple of months, I’ve started a deep dive into Rust kernel development. Not just the standard theory, but how kernels really work under the hood. Studying from lectures is fine, but I find it hard to remember concepts just by listening. I need to build things to learn them. So, I decided to create a basic OS kernel to help me review what I’ve been studying and create something tangible. This hands-on approach to Rust kernel development has been incredibly rewarding.
I chose the name Onko because it perfectly captures the spirit of this project: revitalizing old knowledge to understand the new. In a world of high-level abstractions, building an operating system is an exercise in going back to the fundamentals. I am essentially ‘warming up’ concepts that have existed for decades (interrupts, memory paging, and kernel structures) and bringing them to life with my own code. This project isn’t about competing with modern giants; it’s about mastering the history of the machine to become a better engineer today.
What You’ll Learn
By the end of this article, you’ll understand:
- How to set up a bare-metal Rust development environment
- What a linker script does and why kernels need custom ones
- How bootloaders hand control to your kernel
- How to write directly to video memory (framebuffer graphics)
- The complete build pipeline from Rust code to bootable ISO
What’s Next
This is part 1 of a series. In upcoming articles, we’ll add:
- Part 2: Interrupt handling and keyboard input
- Part 3: Memory management and paging
- Part 4: Basic process scheduling
- Part 5: System calls and user space
Environment Setup
First, we need to set up our environment. We will need the following tools:
- rustup
- qemu (used for virtualizing our OS)
- xorriso (used for creating the ISO)
- git
- make
Let’s start by installing rustup in our system:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# We then switch to nightly build:
rustup override set nightly
# Add the 'bare metal' target
rustup target add x86_64-unknown-none
# Add rust-src (needed to recompile core libraries)
rustup component add rust-src llvm-tools-preview
Next, we can install the other tools we need with the below commands (based on the OS you are using):
Arch Linux / Manjaro
sudo pacman -S qemu-full xorriso git make
Ubuntu / Debian
sudo apt update
sudo apt install qemu-system-x86 xorriso git make build-essential
macOS (Homebrew)
brew install qemu xorriso git make
We are now ready to create our project! Let’s start!
cargo new onko_os
Project Configuration
Cargo.toml
Running cargo new creates our base project. The first thing we have to modify is the Cargo.toml file, where we add our required dependency and a few other project settings. This is what the Cargo.toml file should look like:
[package]
name = "onko_os"
version = "0.1.0"
edition = "2024"
[dependencies]
limine = "0.5.0"
[profile.dev]
panic = "abort"
[profile.release]
panic = "abort"
The dependency we added is limine.
As stated in the project GitHub page: “Limine is a modern, advanced, portable, multiprotocol bootloader and boot manager, also used as the reference implementation for the Limine boot protocol.” This will allow us to focus on developing our OS and not also the bootloader (maybe that will be a future project, who knows 😉).
The other settings (panic = "abort") tell the compiler to disable “Stack Unwinding” and instead simply stop execution immediately when a panic occurs. We have to set this because we do not have an OS underneath to support stack unwinding.
Why no stack unwinding? In normal programs, when a panic occurs, Rust “unwinds” the stack, running destructors and cleaning up resources as it backs out of nested function calls. This requires runtime support from the operating system. Since we are the operating system, we have nothing to unwind to. We tell Rust to simply abort on panic instead.
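As a small, hypothetical illustration (the function and message are made up for this article), any panic now funnels straight into the panic handler we’ll define later:

// Hypothetical example: with panic = "abort", a failed expect() does not
// unwind the stack; execution jumps directly to our #[panic_handler].
fn read_setting(value: Option<u32>) -> u32 {
    value.expect("missing setting: panic -> abort -> our panic handler")
}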
.cargo/config.toml
Before we look at the linker script, we need to understand how we’re telling Cargo to build our kernel differently from a normal Rust program. This is where .cargo/config.toml comes in. Note that this file lives in a .cargo directory inside our project (this is Cargo’s convention for project-specific configuration).
[build]
target = "x86_64-unknown-none"
[target.x86_64-unknown-none]
rustflags = [
"-C", "link-arg=-Tlinker.ld",
"-C", "relocation-model=static"
]
Understanding the Target Triple
target = "x86_64-unknown-none"
This line tells Cargo: “By default, build for this target instead of the host system.” A target triple describes the platform we’re compiling for. Let’s decode what x86_64-unknown-none means:
- x86_64: The CPU architecture. We’re building for 64-bit x86 processors (Intel and AMD)
- unknown: The vendor field, which is “unknown” because we’re not targeting a specific vendor’s platform
- none: The operating system, or rather, the lack of one. There’s no OS underneath us
This target is built into Rust’s compiler. It tells rustc to compile in a “bare metal” mode with no OS assumptions. Compare this to a normal target like x86_64-unknown-linux-gnu, which assumes you’re running on Linux with the GNU C library available.
By setting this in config.toml, we don’t have to type cargo build --target x86_64-unknown-none every time. Cargo will automatically use this target for our project.
Rust Flags Configuration
[target.x86_64-unknown-none]
rustflags = [
"-C", "link-arg=-Tlinker.ld",
"-C", "relocation-model=static"
]
This section configures how Rust should compile specifically for the x86_64-unknown-none target. The rustflags array lets us pass additional arguments to the compiler.
"-C", "link-arg=-Tlinker.ld" : The -C flag passes a code generation option to rustc. In this case, we are using link-arg to pass an argument directly to the linker. The -Tlinker.ld tells the linker: “Use this custom linker script instead of the default one.”
Without this, the linker would use its built-in defaults, which assume you’re linking a normal userspace program. It would place your code at low memory addresses (like 0x400000) and expect things like a C runtime to be present. Our custom linker script (linker.ld) gives us complete control over where our kernel code and data get placed in memory.
"-C", "relocation-model=static" : This tells the compiler to use static linking and avoid position-independent code (PIC).
In normal programs, code is often position-independent, it can be loaded at any address in memory and still work correctly. This is useful for shared libraries and for security features like ASLR (Address Space Layout Randomization). Position-independent code achieves this by using relative addressing and a Global Offset Table (GOT).
But we’re writing a kernel. We don’t need position independence because we control exactly where our kernel is loaded (in the Linker section below we will load it at address 0xffffffff80000000). We want static code that assumes it’s loaded at a specific address. This generates slightly more efficient code and, more importantly, it’s simpler: we don’t need to set up a GOT or handle relocations, and the linker can resolve all addresses at link time.
ℹ️ Why Not Just Use Command-Line Arguments?
Couldn’t we just run cargo build --target x86_64-unknown-none with some environment variables? Yes, but putting this in .cargo/config.toml has several advantages:
- Convenience: We just type make or cargo build, and it does the right thing
- Consistency: Everyone who clones your repository gets the same build configuration
- IDE integration: Most Rust IDEs read this file and automatically use the correct target
- Less error-prone: We can’t accidentally forget to pass the right flags
Someone looking at your repository can immediately see that this is a bare-metal x86-64 project just by reading this file.
The Linker Script
First, we need to define where our code and data are placed in memory. In normal application development, the linker uses default rules, because it assumes that we are running on top of an operating system. In our case we have to specify this ourselves. We accomplish this by creating a file called linker.ld.
Output Format and Architecture
OUTPUT_FORMAT(elf64-x86-64)
OUTPUT_ARCH(i386:x86-64)
First, we specify that we want a 64-bit x86-64 ELF binary (ELF is the standard executable format on Linux). We use this format because Limine knows how to load ELF kernels.
Entry Point
ENTRY(kmain)
This tells the linker that kmain is where execution should begin. When Limine hands control to our kernel, it will jump to whatever address contains the kmain function (and this is why we will use #[no_mangle] on that function in main.rs, so the compiler won’t change the function’s name).
Program Headers (Segments)
PHDRS
{
text PT_LOAD FLAGS((1 << 0) | (1 << 2)) ; /* R + X */
rodata PT_LOAD FLAGS((1 << 2)) ; /* R */
data PT_LOAD FLAGS((1 << 1) | (1 << 2)) ; /* R + W */
}
This section is defining program headers or segments. Segments are like containers that tell the bootloader (and later, our memory management system) what permissions different parts of our kernel need.
The flags use bit manipulation:
- Bit 0 (Value 1): Execute permission
- Bit 1 (Value 2): Write permission
- Bit 2 (Value 4): Read permission
So when we use (1 << 0) | (1 << 2), we shift 1 left by 0 positions (= 1) and 1 left by 2 positions (= 4), then we OR them together to get 5 (Read + Execute).
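Here is the same bit math sketched in Rust (illustrative constants only, not part of our kernel code):

// ELF segment permission bits, matching the PHDRS flags above
const FLAG_EXEC: u32 = 1 << 0;  // 1: execute
const FLAG_WRITE: u32 = 1 << 1; // 2: write
const FLAG_READ: u32 = 1 << 2;  // 4: read

const TEXT_FLAGS: u32 = FLAG_READ | FLAG_EXEC;  // 5 = R + X
const RODATA_FLAGS: u32 = FLAG_READ;            // 4 = R
const DATA_FLAGS: u32 = FLAG_READ | FLAG_WRITE; // 6 = R + W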
We separate segments for security and correctness:
- Our code (text segment) should be readable and executable only. We don’t want bugs accidentally overwriting our instructions
- Our constants (rodata, read-only data) should never be modified
- Our variables (data) need to be read and written but shouldn’t be executable
Higher-Half Kernel
. = 0xffffffff80000000;
This is one of the most important lines in our entire linker script. The dot (.) represents the “location counter” (the current memory address where the linker is placing things). By setting it to 0xffffffff80000000, we are telling the linker to place our kernel in the higher half of the virtual address space.
In the x86-64 architecture, the full 64-bit address space is theoretically enormous, but practical implementations typically only use 48 bits (256TB). The address space is conventionally split:
- Lower half (0x0000000000000000 to 0x00007fffffffffff): User-space applications
- Higher half (0xffff800000000000 to 0xffffffffffffffff): Kernel space
┌─────────────────────────────────┐ 0xFFFFFFFFFFFFFFFF
│ │
│ Kernel Space │ ← Our kernel lives here
│ (Higher Half) │
│ │
├─────────────────────────────────┤ 0xFFFF800000000000
│ │
│ (Non-canonical addresses) │
│ │
├─────────────────────────────────┤ 0x00007FFFFFFFFFFF
│ │
│ User Space │ ← User programs run here
│ (Lower Half) │
│ │
└─────────────────────────────────┘ 0x0000000000000000
When we eventually implement virtual memory and user-space programs, this separation means user programs can’t accidentally access kernel memory. The kernel lives in a protected region of virtual memory.
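To make the split concrete, here is a small Rust sketch (illustrative constants and a helper I made up for this article). On 48-bit implementations, an address is “canonical” when bits 63 through 47 are all copies of bit 47:

// The conventional x86-64 address-space split
const USER_TOP: u64 = 0x0000_7fff_ffff_ffff;      // top of the lower half
const KERNEL_BOTTOM: u64 = 0xffff_8000_0000_0000; // bottom of the higher half
const KERNEL_BASE: u64 = 0xffff_ffff_8000_0000;   // where our linker places the kernel

// Bits 63..47 must all equal bit 47 (17 identical bits after the shift)
fn is_canonical(addr: u64) -> bool {
    let top = addr >> 47;
    top == 0 || top == 0x1_ffff
}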
ℹ️ Limine Memory Mapping
Limine will handle the virtual memory mapping for us. It sets up page tables that map this high virtual address to wherever our kernel is actually loaded in physical RAM. So while our code thinks it’s running at this high address, physically it might be loaded at 2MB or 16MB in RAM. We can explore this abstraction in the future.
Section Definitions
Text Section (Code)
.text : {
*(.text .text.*)
} :text
The .text section contains our executable code. Those patterns mean “take all sections named .text, or starting with .text., from all input object files and put them here.” The :text at the end assigns this section to the text segment we defined earlier (Read + Execute permissions).
Alignment
. = ALIGN(0x1000);
This aligns the location counter to the next 4KB boundary. 4KB is the standard page size on the x86-64 architecture.
ℹ️ Why align to page boundaries?
Pages are the fundamental unit of memory management. When we implement virtual memory, we’ll be mapping memory in 4KB chunks. By aligning our sections to page boundaries, we make it easier to apply different permissions to different sections. We can make the text pages read-only and executable, while data pages are read-write but not executable.
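For reference, the rounding that ALIGN(0x1000) performs can be written as a small Rust helper (a sketch we’ll likely want once we manage pages ourselves):

const PAGE_SIZE: u64 = 0x1000; // 4KB, the x86-64 page size

/// Round addr up to the next multiple of align (align must be a power of two).
fn align_up(addr: u64, align: u64) -> u64 {
    (addr + align - 1) & !(align - 1)
}

// Example: align_up(0x1234, PAGE_SIZE) == 0x2000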
Read-Only Data Section
.rodata : {
*(.rodata .rodata.*)
} :rodata
The .rodata section (read-only data) contains constants and string literals. In our code, if we write something like let MESSAGE: &str = "Hello World";, that string would end up here. It needs to be readable but should never be modified.
Data Section
.data : {
*(.data .data.*)
} :data
The .data section contains initialized global variables (mutable data that has initial values).
BSS Section
.bss : {
*(.bss .bss.*)
*(COMMON)
} :data
The .bss section (Block Started by Symbol) contains uninitialized global variables. Instead of storing zeros in our binary file for every uninitialized variable, the linker just records how much space they need. When our kernel loads, this region is zeroed out automatically. This saves significant space in our kernel binary.
The *(COMMON) directive catches tentative definitions (variables that might be defined in multiple object files). The .bss section is assigned to the :data segment because it needs the same read-write permissions as .data, just without taking up space in the binary file itself.
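As a rough guide (hypothetical statics; exact placement is ultimately up to the compiler and linker), here is where different kinds of globals typically land:

static GREETING: &str = "hello";            // the string bytes go in .rodata
static mut TICK_COUNT: u64 = 42;            // initialized and mutable -> .data
static mut SCRATCH: [u8; 4096] = [0; 4096]; // all zeros -> .bss, costs no space in the binary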
Discarded Sections
/DISCARD/ : {
*(.eh_frame)
*(.note .note.*)
}
This tells the linker to throw away certain sections entirely:
- .eh_frame contains exception-handling information for C++-style unwinding (we don’t need it because we’re writing a kernel with panic = "abort")
- .note sections contain various metadata that the kernel doesn’t need at runtime
Removing these shrinks our kernel binary.
The Complete Linking Process
When we run cargo build, here’s what happens:
- Rust compiles our code into object files containing .text, .rodata, .data, and .bss sections
- The linker reads our linker.ld script
- It creates program headers with the permissions we specified
- It places all code at 0xffffffff80000000 (higher half)
- It arranges sections in order: text, rodata, data, bss (each aligned to 4KB boundaries)
- It produces a single ELF file named kernel.elf
- Limine loads the ELF, sets up virtual memory to map the higher-half addresses to physical RAM, and jumps to kmain
Kernel Code (main.rs)
Now that we understand how the linker arranges our kernel in memory, let’s look at what our kernel actually does.
No Standard Library
#![no_std]
#![no_main]
When we write a normal Rust program, we get the standard library (std) for free. This library gives us things like Vec, String, println!, file I/O, networking and threading. The catch is that the standard library assumes we are running on top of an operating system.
But in our case, we are the operating system. So #![no_std] tells Rust: “Don’t link the standard library, because we have nowhere to run it.”
Similarly, #![no_main] tells Rust to not use its standard main() function entry point. Normally, Rust programs start in a runtime that sets up panic handling, parses command-line arguments, and then calls our main() function. But our kernel doesn’t start that way. Limine is going to jump directly to wherever we tell it to jump.
What we still have: Rust still gives us core, a subset of the standard library that works without an OS. It includes things like Option, Result, iterators, and basic types. Just no heap allocation, no I/O, and no threading. We’ll have to build those ourselves eventually.
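For instance, all of the following works in a #![no_std] kernel because it relies only on core (a hypothetical snippet):

use core::sync::atomic::{AtomicU64, Ordering};

static BOOT_TICKS: AtomicU64 = AtomicU64::new(0);

fn core_demo() {
    // Iterators, Option, and atomics all live in core -- no OS required
    let sum: u64 = (1..=10u64).sum();
    BOOT_TICKS.store(sum, Ordering::Relaxed);
    let halved = Some(sum).map(|v| v / 2).unwrap_or(0);
    let _ = halved;
}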
Limine Protocol Handshake
#[used]
static BASE_REVISION: BaseRevision = BaseRevision::new();
When Limine loads our kernel, it needs to know what version of the Limine protocol we’re speaking. Think of it like two people meeting for the first time: they need to establish which language they’ll communicate in before they can have a meaningful conversation.
The BaseRevision is our way of telling Limine that we are a modern kernel and we understand the latest Limine protocol. Otherwise, Limine might treat our kernel as one speaking a legacy boot protocol and ignore our requests entirely.
The #[used] attribute is crucial here. Normally, if the compiler sees a static variable that you never read from, it optimizes it away to save space. But we need this variable to exist in the compiled binary so that Limine can find it. This attribute prevents the variable from being removed even if it looks unused.
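Recent versions of the limine crate also expose an is_supported() check on BaseRevision (verify this against the crate version you’re using); if available, a sanity check at the top of kmain is a good idea:

// Panic early if the bootloader didn't acknowledge our protocol revision
assert!(BASE_REVISION.is_supported());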
Framebuffer Request
#[used]
static FRAMEBUFFER_REQUEST: FramebufferRequest = FramebufferRequest::new();
Here’s where we start asking Limine for favors. A framebuffer is essentially a chunk of memory that represents the screen. Each pixel on our monitor corresponds to a few bytes in this memory. If we write color values to the right locations, those pixels light up on the screen.
Limine sets up the graphics mode, allocates the framebuffer, and if we politely ask, it will tell us where it is. This is the pattern we’ll use throughout kernel development: the bootloader handles the messy hardware initialization for us, and we just request information about what it set up.
Later, when we are more advanced, we might write our own graphics driver that talks directly to the GPU. But for now we’ll stick to Limine’s framebuffer.
Panic Handler
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
loop {
unsafe { core::arch::asm!("hlt"); }
}
}
In normal Rust programs, when something goes catastrophically wrong, Rust calls a panic handler. The default panic handler prints an error message and terminates the program. The operating system cleans up and life goes on.
But there’s a small problem… We are the operating system! There’s nowhere to return to. If our kernel panics, the entire computer panics with us!
The #[panic_handler] attribute tells Rust: “If something goes horribly wrong, call this function.” Right now, our panic handler is extremely simple: it just halts the CPU in an infinite loop.
The hlt instruction tells the processor to stop executing instructions and wait for an interrupt. This is more power-efficient than an empty loop that spins forever.
The ! return type is Rust’s way of saying “this function never returns.” If the kernel panics, the only way forward is to restart the computer.
Coming in Part 2: We’ll improve our kernel panic handler to display error messages on screen, so we can actually debug what went wrong!
The Kernel Entry Point
#[no_mangle]
pub extern "C" fn kmain() -> ! {
This is where everything begins. When Limine finishes its work, it jumps to this function, as we specified in the linker.ld file.
#[no_mangle]: As I explained in the linker section, this is required because the linker needs to find a function literally named kmain. This attribute prevents Rust from mangling the name into something else (like _ZN8onko_os5kmain17h1234567890abcdefE).
pub extern "C": This tells Rust to use the C calling convention for this function. Different languages have different rules about how function arguments are passed and how the stack is managed. We ensure that when Limine jumps to this function, everything is set up the way it expects. This is the universal calling convention for low-level system code.
-> !: Again, this means this function never returns. A kernel’s entry point runs forever, because there’s no one to give control back to.
Getting the Framebuffer
if let Some(response) = FRAMEBUFFER_REQUEST.get_response() {
if let Some(framebuffer) = response.framebuffers().next() {
We check whether Limine fulfilled the FRAMEBUFFER_REQUEST we created earlier. The get_response() method returns an Option: if Limine didn’t provide a framebuffer, it returns None. We use if let to handle this gracefully—if there is no framebuffer, we just skip the drawing code.
The framebuffers().next() part handles the fact that some systems have multiple monitors. Limine gives us an iterator over all available framebuffers. For now we just grab the first one with .next(), which again returns an Option.
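If you ever want to paint every connected display instead of just the first, the same iterator makes that straightforward (a hypothetical sketch):

for framebuffer in response.framebuffers() {
    // ...run the same drawing code once per framebuffer...
}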
Understanding Framebuffer Properties
let width = framebuffer.width();
let height = framebuffer.height();
let pitch = framebuffer.pitch();
let bpp = framebuffer.bpp();
These four values tell us everything we need to know about the framebuffer’s structure:
Width and height are straightforward: They tell us the resolution in pixels.
Bits per pixel (bpp): Tells us how much data represents each pixel. Typically this is 32 bits (4 bytes), which gives us 8 bits each for red, green, blue, and alpha (transparency).
Pitch is trickier. You might think pitch would just be width * bytes_per_pixel, but it’s not always. Pitch is the number of bytes in one row of pixels, and it might be larger than width * bytes_per_pixel due to alignment requirements. Graphics hardware sometimes wants each row to start at a specific byte boundary for performance reasons. Always use pitch when calculating row offsets!
let bytes_per_pixel = (bpp as u64) / 8;
We convert bits per pixel to bytes per pixel (if we have 32 bits per pixel, that’s 4 bytes per pixel). We cast to u64 because we will be doing address calculations later, and we want to avoid any potential overflow issues. In kernel development, it’s better to be explicit about types than to let the compiler make assumptions.
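A quick worked example with made-up numbers shows why we trust pitch instead of width:

// Hypothetical 1024x768 display at 32 bpp:
const WIDTH: u64 = 1024;
const BPP: u64 = 32;
const BYTES_PER_PIXEL: u64 = BPP / 8;            // 4
const PACKED_ROW: u64 = WIDTH * BYTES_PER_PIXEL; // 4096 bytes of actual pixel data
// The hardware may pad each row: pitch could be 4096, or e.g. 4224.
// Always compute row offsets with pitch, never with PACKED_ROW.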
Raw Framebuffer Pointer
let buffer_ptr = framebuffer.addr();
This is the raw pointer to the beginning of video memory. It’s not behind any safety abstractions, any permission checks, any protection mechanisms. It’s just a memory address.
Why unsafe is unavoidable here: In kernel development, we must use unsafe for hardware interaction. We’re writing directly to memory-mapped hardware. There’s no way around it. The key is to keep our unsafe blocks small and well-documented, so we know exactly what invariants we’re maintaining.
If we write to the wrong offset, we could corrupt memory. If we write past the end of the framebuffer, we might crash the system. As we’re developing a kernel, there’s no safety net.
Defining Our Color
let r: u8 = 12;
let g: u8 = 24;
let b: u8 = 33;
let color = (r as u32) << 16 | (g as u32) << 8 | (b as u32);
Now we define our background color. I chose a dark blue-gray color (RGB: 12, 24, 33), but you can pick any color you like.
The magic happens in how we encode this color into a 32-bit integer. Most graphics systems use the BGRA or RGBA format for 32-bit color, with one byte for each channel.
The typical layout for 32 bpp is:
Byte 3 Byte 2 Byte 1 Byte 0
[Alpha] [Red] [Green] [Blue]
So we shift red left by 16 bits (moving it to byte 2), green left by 8 bits (moving it to byte 1), and leave blue at byte 0. The | operator combines them with bitwise OR. We leave alpha at 0, which usually means fully opaque.
Example with actual values:
Red = 12 = 0x0C → shift left 16 bits → 0x000C0000
Green = 24 = 0x18 → shift left 8 bits → 0x00001800
Blue = 33 = 0x21 → no shift → 0x00000021
OR together
─────────────
Final color = 0x000C1821 (or 792,609 in decimal)
ℹ️ Color byte order
Some systems might use a different byte order. If your colors look wrong (red and blue swapped), you might need to adjust the bit shifts to:
let color = (b as u32) << 16 | (g as u32) << 8 | (r as u32);
Drawing to the Screen
for y in 0..height {
for x in 0..width {
let pixel_offset = y * pitch + x * bytes_per_pixel;
unsafe {
let pixel_addr = buffer_ptr.add(pixel_offset as usize);
*(pixel_addr as *mut u32) = color;
}
}
}
Here’s where we actually fill the screen with color. We loop through every row (y) and every column (x), calculate where that pixel lives in memory, and write our color value there.
The pixel offset formula is crucial: y * pitch + x * bytes_per_pixel
Why pitch instead of width? Because pitch accounts for any extra padding at the end of each row. We multiply y by pitch to skip over all the previous rows, then add x * bytes_per_pixel to move across the current row.
This code is wrapped in unsafe because we are doing raw pointer arithmetic and dereferencing pointers. Rust can’t verify that we’re not writing past the end of the framebuffer or to an invalid address. We have to be absolutely sure our math is correct.
The danger: If y or x went out of bounds, we could overwrite arbitrary memory. In a user-space program, the OS would kill your process. In kernel-space, we’d crash the entire system.
We use .add(pixel_offset as usize) to calculate the address of the specific pixel. This does pointer arithmetic, adding the offset to our base pointer. Then we cast it to *mut u32 (a mutable pointer to a 32-bit unsigned integer) and dereference it with * to write our color value. (u32 because we’re writing 4 bytes at once, RGBA color).
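If you want to factor the pointer math out, a small helper keeps all the unsafe arithmetic in one place (a sketch; the caller must still guarantee that x and y are in bounds):

/// Write one 32-bit pixel. Caller must ensure x < width and y < height.
unsafe fn put_pixel(base: *mut u8, pitch: u64, bytes_per_pixel: u64, x: u64, y: u64, color: u32) {
    let offset = (y * pitch + x * bytes_per_pixel) as usize;
    *(base.add(offset) as *mut u32) = color;
}

The fill loop then becomes a single call: unsafe { put_pixel(buffer_ptr, pitch, bytes_per_pixel, x, y, color) }.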
The Idle Loop
loop {
unsafe { core::arch::asm!("hlt"); }
}
Finally, we enter the kernel’s idle loop. An operating system never “finishes” its work and exits. It runs forever, waiting for something to happen (a keyboard press, a mouse movement, a network packet arriving).
Right now, we don’t have any way to handle those events, so we just halt the CPU and wait.
We put the hlt instruction inside a loop because some systems have non-maskable interrupts (NMIs) that can wake the CPU even if we haven’t explicitly enabled interrupts; when that happens, the loop simply halts again. This is always good practice.
Bootloader Configuration (limine.conf)
Before we can run our kernel, we need to tell Limine how to boot it. This is where limine.conf comes in. It’s Limine’s configuration file that specifies what to boot and how.
timeout: 3
/Onko OS
protocol: limine
kernel_path: boot():/kernel.elf
Let’s break this down:
timeout: 3 tells Limine to wait 3 seconds before automatically booting. If you had multiple boot entries (say, different kernels or different configurations), Limine would display a menu and wait 3 seconds for you to choose. Since we only have one entry, it just shows a brief splash screen before loading our kernel. You can set this to 0 if you want instant booting, or increase it if you want more time to see what’s happening.
/Onko OS is the name of our boot entry. The forward slash at the beginning is part of Limine’s syntax for defining an entry. This is what you’d see in the boot menu if you had multiple options. You can name this whatever you want.
protocol: limine tells Limine which boot protocol to use. Limine supports multiple protocols (including the older Stivale2 protocol), but we’re using the modern Limine protocol. This is what allows us to use those request/response structures in our Rust code to ask for the framebuffer.
kernel_path: boot():/kernel.elf specifies where our kernel binary is located. The boot(): part is Limine’s syntax meaning “the partition where this config file is located.” So it’s saying: “Look in the same place you found this config file, and load the file named kernel.elf.” This path corresponds to where our Makefile copies the kernel binary when building the ISO image.
That’s it for the configuration. Limine doesn’t need much: just tell it what to boot and it handles the rest. It sets up the CPU in 64-bit long mode, establishes basic page tables, sets up the framebuffer, and jumps to our kernel. All the messy hardware initialization is handled for us.
Build System (Makefile)
Now let’s look at how we actually build and run our kernel. The Makefile manages the entire process, from compiling Rust code to creating a bootable ISO image.
KERNEL := target/x86_64-unknown-none/debug/onko_os
ISO := image.iso
.PHONY: all run clean kernel limine
all: $(ISO)
First, we define some variables. KERNEL points to where Cargo outputs our compiled kernel binary. Notice that the path includes x86_64-unknown-none/debug. This matches the target we specified in .cargo/config.toml. The ISO variable is simply the name of our final bootable ISO image.
The .PHONY line tells Make that these aren’t actual files, they’re just commands. Without this, if you happened to have a file named “clean” in your directory, make clean wouldn’t work properly.
Step 1: Building the Kernel
kernel:
cargo build
This target simply runs cargo build. Cargo reads our .cargo/config.toml to know it should build for the x86_64-unknown-none target, and it uses our custom linker script (linker.ld) thanks to the rustflags we set. The output is a standalone ELF binary at the path specified in our KERNEL variable.
You could run make kernel by itself if you just want to compile without creating an ISO. This is useful when we’re iterating quickly and just want to check for compiler errors.
Step 2: Getting Limine
limine:
@if [ ! -d limine ]; then \
git clone https://github.com/limine-bootloader/limine.git --branch=v10.x-binary --depth=1; \
fi
make -C limine
This target ensures we have the Limine bootloader tools available.
The @if [ ! -d limine ]; then checks if a directory named “limine” exists. The @ at the beginning suppresses echoing the command to the terminal (purely cosmetic). If the directory doesn’t exist, we clone the Limine repository from GitHub.
--branch=v10.x-binary specifies which version of Limine to use. We’re using the v10.x branch, which includes pre-compiled binaries. This saves us from having to compile Limine from source, which would require additional dependencies.
--depth=1 is an optimization that tells git to only download the latest commit, not the entire history. This makes the clone much faster and uses less disk space.
After cloning (or if the directory already exists), we run make -C limine. The -C flag means “change to this directory before running make”. This builds the Limine bootloader tools, specifically the limine executable that we’ll use to make our ISO bootable.
Step 3: Creating the Bootable ISO
$(ISO): kernel limine
rm -rf iso_root
mkdir -p iso_root
cp $(KERNEL) iso_root/kernel.elf
cp limine.conf limine/limine-bios.sys limine/limine-bios-cd.bin limine/limine-uefi-cd.bin iso_root/
xorriso -as mkisofs -b limine-bios-cd.bin \
-no-emul-boot -boot-load-size 4 -boot-info-table \
--efi-boot limine-uefi-cd.bin \
-efi-boot-part --efi-boot-image --protective-msdos-label \
iso_root -o $(ISO)
./limine/limine bios-install $(ISO)
This is where everything comes together. The target depends on both kernel and limine, so Make will ensure those are built first before running these commands.
rm -rf iso_root and mkdir -p iso_root give us a clean working directory. We’re creating a temporary folder structure that will become the contents of our ISO image.
cp $(KERNEL) iso_root/kernel.elf copies our compiled kernel into this staging directory. Note that we rename it to kernel.elf—this matches the kernel_path we specified in limine.conf.
cp limine.conf limine/limine-bios.sys... copies all the files we need for booting:
- limine.conf: Our bootloader configuration
- limine-bios.sys: The Limine bootloader for BIOS systems
- limine-bios-cd.bin: Boot code for BIOS CD/DVD booting
- limine-uefi-cd.bin: Boot code for UEFI systems
By including both BIOS and UEFI boot files, our ISO can boot on both legacy BIOS systems and modern UEFI systems. Maximum compatibility.
The xorriso command is where we actually create the ISO image. xorriso is a tool for creating ISO 9660 filesystem images (the standard format for CDs and bootable disk images). Let’s decode those flags:
- -as mkisofs: Run xorriso in mkisofs compatibility mode (mkisofs is an older tool many people are familiar with)
- -b limine-bios-cd.bin: Use this file as the boot image for BIOS systems
- -no-emul-boot: Don’t emulate a floppy or hard disk; boot directly
- -boot-load-size 4: Load 4 sectors (2KB) of the boot image
- -boot-info-table: Create a table with boot information for the bootloader
- --efi-boot limine-uefi-cd.bin: Use this file for UEFI booting
- -efi-boot-part: Mark this as an EFI boot partition
- --efi-boot-image: Create an EFI boot image
- --protective-msdos-label: Add a protective MBR for compatibility
- iso_root -o $(ISO): Create the ISO from the iso_root directory and output it to image.iso
After creating the ISO, we have one more critical step:
./limine/limine bios-install $(ISO) runs Limine’s installation tool on our ISO. This writes boot code into the ISO’s boot sectors so that BIOS systems know how to start the Limine bootloader. Without this step, BIOS systems wouldn’t recognize the ISO as bootable.
Step 4: Running in QEMU
run: $(ISO)
qemu-system-x86_64 -cdrom $(ISO)
This is the payoff. The run target depends on $(ISO), so running make run will automatically build everything if needed, then launch QEMU with our ISO image as a virtual CD-ROM.
QEMU is an emulator that can simulate an entire x86-64 computer. When you run this command, QEMU creates a virtual machine, inserts our ISO as if it were a CD in a CD drive, and boots from it. Within seconds, you’ll see a window open with our kernel running, displaying that dark blue-gray screen we programmed.
You can pass additional flags to QEMU if you want. For example:
- -m 512M: Allocate 512MB of RAM to the virtual machine (the default is 128MB)
- -enable-kvm: Use hardware virtualization on Linux for better performance
- -serial stdio: Redirect serial port output to your terminal (useful once you implement serial logging)
Cleaning Up
clean:
cargo clean
rm -rf iso_root $(ISO) limine
The clean target removes all build artifacts. cargo clean deletes the target/ directory where Cargo stores compiled files. Then we remove our temporary iso_root directory, the final ISO image, and even the entire Limine directory. Running make clean followed by make all gives us a completely fresh build from scratch, useful when troubleshooting strange issues.
Building and Running the Kernel
Now we have everything we need to build and run our kernel. We can do so by running:
make clean
make all
make run
If we make any modification to the code, we only have to run make run to rebuild and relaunch the kernel.
Conclusion & Next Steps
Congratulations! If you’ve followed along, you now have a working kernel that:
- Boots on both BIOS and UEFI systems
- Runs in 64-bit long mode in the higher half of memory
- Communicates with the Limine bootloader using a modern protocol
- Writes directly to video memory to display graphics
- Compiles from Rust to a bootable ISO with a single make run command
This might seem simple (just a colored screen) but you’ve actually accomplished something significant. You’ve crossed the boundary from high-level programming into the world of bare metal. You understand how bootloaders work, how memory is organized, how the linker arranges your code, and how to write to hardware directly.
What You’ve Learned
Beyond the technical skills, you’ve learned the mindset of systems programming:
- Nothing is magic: Every abstraction has a concrete implementation underneath
- Safety is a luxury: In kernel space, we have to be our own safety net
- Details matter: A single misplaced byte can crash the entire system
- Tools are your friends: Understanding your build pipeline makes debugging easier
Common Issues & Troubleshooting
Before moving forward, here are some issues you might encounter and how to solve them:
QEMU Won’t Start
Problem: qemu-system-x86_64: command not found
- Solution: Make sure you installed QEMU correctly for your platform (see the Environment Setup section)
- On Linux, you might need qemu-system-x86 instead of qemu-full
Black Screen
Problem: QEMU opens but shows only a black screen
- Solution: Check that your kernel actually compiled. Look for target/x86_64-unknown-none/debug/onko_os
- Make sure you ran make all, not just make kernel
- Try adding -serial stdio to the QEMU command to see if there are any boot messages
Wrong Colors
Problem: The screen displays but colors are wrong (red and blue swapped)
- Solution: Your system uses BGR instead of RGB byte order. Change the color calculation to:
let color = (b as u32) << 16 | (g as u32) << 8 | (r as u32);
Kernel Panic On Boot
Problem: System crashes immediately or QEMU exits
- Solution: Check your linker script syntax carefully; missing semicolons or braces will cause link errors
- Make sure #[no_mangle] is on your kmain function
- Verify your .cargo/config.toml has the correct target and flags
Build Errors
Problem: Rust compiler errors about missing core or target not found
- Solution: Make sure you installed the necessary Rust components:
rustup override set nightly
rustup target add x86_64-unknown-none
rustup component add rust-src llvm-tools-preview
ISO Won’t Boot on Real Hardware
Problem: Works in QEMU but not on a real computer
- Solution: Make sure you ran ./limine/limine bios-install $(ISO) after creating the ISO
- Try writing the ISO to a USB stick with a tool like dd or Rufus
- Some systems need Secure Boot disabled in BIOS/UEFI settings
Learning Resources
If you want to dive deeper while waiting for Part 2, here are some excellent resources:
OS Development:
- OSDev Wiki – The bible of OS development
- Writing an OS in Rust by Philipp Oppermann – Excellent blog series
- The little book about OS development – Great practical guide
x86-64 Architecture:
- Intel 64 and IA-32 Architectures Software Developer Manuals – The official reference
- AMD64 Architecture Programmer’s Manual – AMD’s version
Rust Bare Metal:
- The Embedded Rust Book – Many concepts apply to kernel development
- Rust Embedded Working Group – Useful crates and tools
Where We Are Going
Now that we have a booting kernel, the real work begins. My goal for this series is to explore the following concepts, likely in this order:
Part 2: Interrupt Handling & Keyboard Input
- Setting up the Interrupt Descriptor Table (IDT)
- Handling hardware interrupts
- Reading keyboard input
- Displaying characters on screen
Part 3: Memory Management
- Understanding paging and virtual memory
- Implementing a frame allocator
- Setting up our own page tables
- Heap allocation with a basic allocator
Part 4: Process Scheduling
- Creating the concept of a process
- Implementing a simple round-robin scheduler
- Context switching between processes
- Basic multitasking
Part 5: System Calls & User Space
- Transitioning between kernel and user mode
- Implementing system call interface
- Loading and running user programs
- Process isolation and protection
Join the Journey
Building an OS is challenging, but that’s what makes it rewarding. Every time you add a new feature and see it work, you’ll feel that spark of understanding how computers really work from the ground up.
I’ll be documenting my progress on this blog as I go. If you’re following along, I’d love to hear about your experience! Feel free to:
- Leave comments below with questions or issues
- Share what you’ve built or modified
- Suggest features you’d like to see in future parts
Remember: every expert was once a beginner. Every operating system started with a colored screen. The difference between learning and mastering is just persistence and practice.
Happy kernel hacking! 🦀🖥️
📃 Build Commands Quick Reference
# First time setup
cargo new onko_os
cd onko_os
# Add dependencies and configuration files as described above
# Clean build
make clean
make all
# Quick rebuild and run (after code changes)
make run
# Just compile (no ISO creation)
make kernel
# Run with extra QEMU options
qemu-system-x86_64 -cdrom image.iso -m 512M -serial stdio
Next article: Part 2 – Interrupt Handling & Keyboard Input
Repository: The full code for this project will be available on GitHub.
Acknowledgments
Special thanks to:
- The Limine bootloader team for creating an excellent modern bootloader.
- The OSDev community for their comprehensive wiki and helpful forums.
- The Rust community for building such a powerful language for systems programming.