
Process Lifecycle

Created Mar 29, 2026 · Updated Mar 29, 2026 · Takeshi Takatsudo

Port cleanup, signal handling, process groups, and clean shutdown for Tauri v2 apps

Managing process lifecycles is one of the most error-prone aspects of Tauri wrapper apps. If you get this wrong, you end up with zombie processes, occupied ports, and apps that will not launch a second time.

This page covers the full lifecycle: port cleanup before launch, sidecar stdout/stderr handling, process group setup for clean shutdown, the macOS window-close-must-exit pattern, and the graceful kill sequence.

Kill Stale Port Before Spawn

Before spawning a new sidecar, you must ensure the port is free. A previous instance of the app may have crashed and left a process listening on the port:

fn kill_port() {
    if let Ok(output) = Command::new("/usr/bin/lsof")
        .args(["-ti", &format!(":{PORT}")])
        .output()
    {
        let pids = String::from_utf8_lossy(&output.stdout);
        for line in pids.trim().lines() {
            if let Ok(pid) = line.trim().parse::<i32>() {
                log(&format!(
                    "kill_port: killing stale pid {pid} on port {PORT}"
                ));
                // SAFETY: pid is a valid process ID obtained from lsof
                unsafe { libc::kill(pid, libc::SIGTERM) };
            }
        }
        if !pids.trim().is_empty() {
            thread::sleep(Duration::from_millis(500));
        }
    }
}

Key details:

  • Uses the absolute path /usr/bin/lsof — GUI-launched macOS apps get a minimal PATH, so a bare lsof lookup that works in dev can fail in production
  • -ti gives terse output (just PIDs, no headers) for the given port
  • Sends SIGTERM (graceful) rather than SIGKILL (forced)
  • Waits 500ms for processes to actually terminate
  • Called before every spawn_sidecar()

⚠️ Warning

Always call kill_port() before spawning the sidecar. If you skip this step and a stale process is holding the port, your new sidecar will either fail to bind or bind to a different port, and your app will never connect to it.
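The fixed 500ms sleep in kill_port() is usually enough, but if you want certainty, you can poll lsof until the port actually reads as free. A minimal std-only sketch under the same assumptions as kill_port() (the name wait_for_port_free and the 50ms poll interval are illustrative, not part of the original code):

```rust
use std::process::Command;
use std::thread;
use std::time::Duration;

/// Sketch: poll lsof until nothing is listening on `port` or the
/// deadline passes. Returns true once the port reads as free.
fn wait_for_port_free(port: u16, deadline_ms: u64) -> bool {
    for _ in 0..(deadline_ms / 50) {
        let busy = Command::new("/usr/bin/lsof")
            .args(["-ti", &format!(":{port}")])
            .output()
            .map(|o| !o.stdout.is_empty()) // any PID output means occupied
            .unwrap_or(false);
        if !busy {
            return true; // nothing listening any more
        }
        thread::sleep(Duration::from_millis(50));
    }
    false // still occupied at the deadline
}
```

This replaces the blind sleep with an upper bound: you proceed as soon as the port frees up, and you learn explicitly when it never does.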

When to Call kill_port()

fn main() {
    let sidecar: Option<Sidecar> = if IS_DEV {
        None
    } else {
        kill_port();  // Clean up before first spawn
        Some(spawn_sidecar(&pnpm_path))
    };
}

fn do_refresh(app_handle: &AppHandle) {
    // `guard` is the locked Option<Sidecar> slot from AppState (see below)
    // On refresh: kill old sidecar, then clean the port, then spawn new
    if let Some(mut old) = guard.take() {
        kill_sidecar(&mut old);
    }
    kill_port();  // Clean up before re-spawn
    *guard = Some(spawn_sidecar(&pnpm_path));
}

Sidecar stdout/stderr Redirection

Sidecar output must go somewhere. Letting it inherit the parent’s stdout/stderr works in dev mode (you see it in the terminal) but is useless in production (there is no terminal). Redirect to a log file:

fn spawn_sidecar(pnpm_path: &std::path::Path) -> Sidecar {
    let sidecar_log_path = app_dir().join(".tauri-sidecar-log");

    let log_file = fs::OpenOptions::new()
        .create(true)
        .write(true)
        .truncate(true)  // Fresh log each launch
        .open(&sidecar_log_path)
        .unwrap_or_else(|e| {
            panic!("Failed to open sidecar log at {}: {e}",
                sidecar_log_path.display());
        });
    let log_file_clone = log_file
        .try_clone()
        .expect("Failed to clone sidecar log file handle");

    let mut cmd = Command::new(pnpm_path);
    cmd.args(["dev"])
        .current_dir(&target_dir)
        .stdout(Stdio::from(log_file))       // stdout -> log file
        .stderr(Stdio::from(log_file_clone)); // stderr -> same log file

    // ...spawn
}

📝 Note

The try_clone() call is necessary because Stdio::from() takes ownership of the file handle. You need two separate handles — one for stdout and one for stderr — even though they point to the same file.
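To see concretely why one file can back both streams, here is a small std-only illustration (the function name and path argument are arbitrary): try_clone() duplicates the OS-level handle, and the two handles share one underlying file description and cursor, so writes through either land in the same file, in order.

```rust
use std::fs::OpenOptions;
use std::io::{Read, Seek, SeekFrom, Write};
use std::path::Path;

/// Demo: write through a handle and its try_clone(); both writes
/// append to the same file because the clones share one file cursor.
fn shared_handle_demo(path: &Path) -> std::io::Result<String> {
    let mut a = OpenOptions::new()
        .create(true)
        .read(true)
        .write(true)
        .truncate(true)
        .open(path)?;
    let mut b = a.try_clone()?; // second handle, same open file

    writeln!(a, "stdout line")?; // as if from the child's stdout
    writeln!(b, "stderr line")?; // as if from stderr; continues after it

    a.seek(SeekFrom::Start(0))?; // rewind and read back the merged log
    let mut out = String::new();
    a.read_to_string(&mut out)?;
    Ok(out)
}
```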

Log File Strategy

The pattern used here is:

  • Truncate on each launch (write(true).truncate(true)) — the log file only contains output from the current session
  • App-scoped path (.tauri-sidecar-log in the app directory) — easy to find for debugging
  • Separate from app log — the app’s own log (.tauri-log) tracks lifecycle events; the sidecar log captures raw stdout/stderr

Process Group for Clean Shutdown

This is one of the most important patterns. When you spawn a sidecar like pnpm dev, it spawns its own child processes (Vite, esbuild, etc.). If you only kill the pnpm process, its children become orphans and keep holding the port.

The solution is to spawn the sidecar in its own process group:

let mut cmd = Command::new(pnpm_path);
cmd.args(["dev"])
    .current_dir(&dir)
    .stdout(Stdio::from(log_file))
    .stderr(Stdio::from(log_file_clone));

#[cfg(unix)]
{
    use std::os::unix::process::CommandExt;
    cmd.process_group(0);  // New process group, PGID = child PID
}

let child = cmd.spawn().expect("Failed to spawn sidecar");
let pid = child.id();

process_group(0) tells the OS to create a new process group with the child’s PID as the group ID. All processes spawned by this child (and their children) inherit this group ID.

Why This Matters

Without process_group(0):

Tauri App (PID 100)
  └── pnpm (PID 200)     <-- You can kill this
        └── vite (PID 300)  <-- This becomes an orphan!
              └── esbuild (PID 400)  <-- This too!

With process_group(0):

Tauri App (PID 100)
  └── [Process Group PGID=200]
        ├── pnpm (PID 200)
        ├── vite (PID 300)
        └── esbuild (PID 400)

kill(-200, SIGTERM) → kills ALL of them

macOS Window Close Must Exit

On macOS, closing the last window does not terminate the application by default — the process stays alive in the Dock. For wrapper apps, this is wrong: if the window is closed, the sidecar should be killed and the app should exit.

.build(tauri::generate_context!())
.expect("error while building tauri application")
.run(move |app_handle, event| match &event {
    tauri::RunEvent::WindowEvent {
        event: tauri::WindowEvent::Destroyed,
        ..
    } => {
        // Kill sidecar on window close
        if !IS_DEV {
            if let Ok(mut g) = sidecar_for_exit.lock() {
                if let Some(mut s) = g.take() {
                    kill_sidecar(&mut s);
                }
            }
        }
        // Force app exit
        app_handle.exit(0);
    }
    _ => {}
});

⚠️ Warning

If you forget app_handle.exit(0), the Rust process will keep running after the window is closed. The sidecar will also keep running (if you forgot to kill it). The user will see a Dock icon with no window and wonder why their port is still occupied.

The sidecar_for_exit Pattern

The sidecar state must be accessible from the .run() closure, which is separate from the .setup() closure. The pattern is to clone the Arc<Mutex<Option<Sidecar>>> before building the app:

let app_state = AppState {
    sidecar: Arc::new(Mutex::new(sidecar)),
    pnpm_path: found_pnpm,
    zoom: Mutex::new(1.0),
};

// Clone the Arc before moving app_state into .manage()
let sidecar_for_exit = app_state.sidecar.clone();

tauri::Builder::default()
    .manage(app_state)  // app_state moved here
    .setup(|app| { /* ... */ })
    .build(tauri::generate_context!())
    .run(move |app_handle, event| {
        // sidecar_for_exit is accessible here
        // ...
    });

Sidecar Kill Sequence

The kill sequence is a two-phase approach: try graceful shutdown first, then force-kill if necessary.

fn kill_sidecar(sidecar: &mut Sidecar) {
    log(&format!("kill_sidecar: pid={}", sidecar.pid));

    // Phase 1: SIGTERM the entire process group
    #[cfg(unix)]
    {
        if let Ok(pid) = i32::try_from(sidecar.pid) {
            if pid > 0 {
                // Negative PID signals the entire process group
                // SAFETY: pid is a valid child process ID
                unsafe { libc::kill(-pid, libc::SIGTERM) };
            }
        }
    }

    // Wait for graceful shutdown
    thread::sleep(Duration::from_millis(500));

    // Phase 2: Check if it exited, force-kill if not
    match sidecar.child.try_wait() {
        Ok(Some(_)) => {
            log("kill_sidecar: process already exited");
        }
        _ => {
            log("kill_sidecar: escalating to SIGKILL");
            let _ = sidecar.child.kill();  // SIGKILL
            let _ = sidecar.child.wait();  // Reap zombie
        }
    }
}

The Two Phases Explained

sequenceDiagram
    participant App as Tauri App
    participant PG as Process Group
    participant Child as Child Process
    App->>PG: kill(-pid, SIGTERM)
    Note over PG: All processes in group<br/>receive SIGTERM
    App->>App: sleep(500ms)
    App->>Child: try_wait()
    alt Process exited
        Child-->>App: Ok(Some(status))
        Note over App: Clean exit, done
    else Process still running
        Child-->>App: Ok(None) or Err
        App->>Child: kill() [SIGKILL]
        App->>Child: wait() [reap zombie]
        Note over App: Forced exit, done
    end

Key points:

  1. SIGTERM to -pid (negative) targets the entire process group, not just the top-level process
  2. 500ms wait gives processes time to clean up (flush buffers, close connections)
  3. try_wait() checks if the process exited without blocking
  4. kill() then wait() — kill() sends SIGKILL, and wait() reaps the zombie process to prevent resource leaks

💡 Tip

The wait() call after kill() is essential. Without it, the killed process becomes a zombie — it no longer runs, but its entry stays in the process table until the parent reaps it.

Complete Lifecycle Sequence

Here is the full lifecycle from app launch to app exit:

sequenceDiagram
    participant User
    participant Rust as Rust Main
    participant Port as Port (lsof)
    participant Sidecar
    participant Window as WebView
    User->>Rust: Launch app
    Note over Rust: Production mode only
    Rust->>Port: kill_port() - check for stale processes
    Port-->>Rust: PIDs on port (if any)
    Rust->>Port: SIGTERM stale PIDs
    Rust->>Sidecar: spawn_sidecar()
    Note over Sidecar: process_group(0)<br/>stdout/stderr -> log file
    Rust->>Window: Create with loading page
    Window-->>User: Loading spinner visible
    Rust->>Rust: Background thread polls
    Sidecar-->>Rust: Server ready (HTTP 200)
    Rust->>Window: navigate(server_url)
    Window-->>User: Real content
    Note over User: User closes window
    User->>Window: Close
    Window->>Rust: WindowEvent::Destroyed
    Rust->>Sidecar: kill(-pid, SIGTERM)
    Rust->>Rust: sleep(500ms)
    Rust->>Sidecar: try_wait / kill / wait
    Rust->>Rust: app_handle.exit(0)

Logging

Both the app’s own lifecycle events and the sidecar’s output should be logged to separate files:

fn log(msg: &str) {
    use std::io::Write;
    let path = app_dir().join(".tauri-log");
    if let Ok(mut f) = fs::OpenOptions::new()
        .create(true)
        .append(true)  // Append, don't truncate
        .open(&path)
    {
        let secs = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap_or_default()
            .as_secs();
        let _ = writeln!(f, "[{secs}] {msg}");
    }
}

This gives you two log files for debugging:

File                  Contents
.tauri-log            App lifecycle events (spawn, kill, ready, timeout)
.tauri-sidecar-log    Raw sidecar stdout/stderr

The app log uses append(true) so it accumulates across launches (useful for debugging intermittent issues). The sidecar log uses truncate(true) so it only shows the current session’s output.
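When startup fails, it also helps to surface the tail of .tauri-sidecar-log to the user (for example in an error dialog) instead of making them hunt for the file. A hedged sketch — log_tail is a hypothetical helper, not part of the original code:

```rust
use std::fs;
use std::path::Path;

/// Hypothetical helper: return the last `max_lines` lines of a log
/// file, e.g. to embed in an error dialog when the sidecar never
/// reports ready.
fn log_tail(path: &Path, max_lines: usize) -> String {
    match fs::read_to_string(path) {
        Ok(s) => {
            let lines: Vec<&str> = s.lines().collect();
            // Keep only the final max_lines lines (or all, if fewer).
            let start = lines.len().saturating_sub(max_lines);
            lines[start..].join("\n")
        }
        Err(e) => format!("(could not read {}: {e})", path.display()),
    }
}
```

Because the sidecar log is truncated per session, its tail is almost always the relevant failure output.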