Compare commits

...

10 commits

Author SHA1 Message Date
30a1ae28f7 Refactor shared configuration and update LNBits service for improved domain handling
Updated shared.nix to enhance domain parameter propagation and modified configuration.nix to utilize the inherited domain for machine-specific setups. Adjusted example-service.nix to accept the domain as an argument, improving modularity. Additionally, added a new documentation file explaining the LNBits flake deployment process, detailing architecture, key components, and deployment instructions for better onboarding and understanding of the system.
2025-10-12 08:52:56 +02:00
ca5b78b561 Add build-local.nix for machine-specific web-app builds and update deployment instructions
Introduced a new example-build-local.nix file to facilitate machine-specific web-app builds, enhancing the deployment process. Updated the .gitignore to include build-local.nix, ensuring user-specific configurations remain untracked. Revised the DEPLOYMENT-GUIDE.md to reflect the addition of build-local.nix and provide clearer instructions for setting up configuration files, improving the onboarding experience for new users.
2025-10-12 08:34:58 +02:00
2229717860 Add initial deployment configuration and setup instructions
Introduced a new example-krops.nix file for deployment configuration, providing a template for machine-specific setups. Updated the .gitignore to include krops.nix, ensuring user-specific configurations are not tracked. Expanded the DEPLOYMENT-GUIDE.md with detailed initial setup instructions, including steps for creating and customizing krops.nix and machine configurations, enhancing the onboarding process for new users.
2025-10-12 08:25:10 +02:00
d794cf4394 Enhance deployment configuration with machine-specific templates and secrets management
Updated the .gitignore to include machine-specific configurations and secrets handling. Expanded the DEPLOYMENT-GUIDE.md to provide detailed instructions for adding new machines using a template, along with steps for managing encrypted secrets. Introduced example configuration files for boot settings and a sample WireGuard service, improving modularity and flexibility in the NixOS deployment process. Adjusted krops.nix to reference the correct path for machine-specific configurations.
2025-10-12 08:16:43 +02:00
78dcba25ec FIX: directory permissions and symlink management
Updated the lnbits.nix configuration to set appropriate permissions on the extensions directory and create a symlink for LNBits extensions, improving security and functionality.
2025-10-12 07:35:28 +02:00
aa0381c42b Refactor LNBits configuration to utilize flake imports and enhance modularity
Updated the lnbits.nix configuration to import the LNBits service module from a flake, improving maintainability and alignment with deployment practices. Adjusted the shared configuration to make the 'domain' parameter accessible to all imported modules, and removed the deprecated lnbits-service.nix file to streamline the setup.
2025-10-11 10:28:58 +02:00
30209458f7 Add support for handling machine-specific secrets in the deployment process
Expanded the DEPLOYMENT-GUIDE.md to include a comprehensive section on managing encrypted secrets using Passage and Pass. Detailed steps for setting up, creating, and deploying machine-specific secrets, along with security notes. Updated krops.nix and config/lnbits.nix to include configurations for deploying custom LNBits extensions, enhancing the flexibility and security of the NixOS deployment process.
2025-10-10 01:15:42 +02:00
d27bdd3005 Add machine-specific service configuration for WireGuard and related templates
Introduced a comprehensive guide for adding machine-specific services in the DEPLOYMENT-GUIDE.md, including steps to configure WireGuard for specific machines. Added example configuration files for boot settings, machine-specific configurations, and an example service for WireGuard. This enhances the modularity and flexibility of the NixOS deployment process, allowing for tailored configurations per machine.
2025-10-10 00:49:22 +02:00
c2b9eac973 Add lnbits to .gitignore 2025-10-09 22:38:42 +02:00
4170340d28 Update Nix configuration to use git-based nixpkgs and adjust module imports
Modified krops.nix to switch to a git-based nixpkgs source, noting the initial download cost. Updated shared.nix to change module imports to absolute paths and enabled experimental Nix features. Adjusted configuration.nix to import shared configuration from an absolute path and updated the domain name for machine1. These changes enhance clarity, maintainability, and functionality in the NixOS setup.
2025-10-09 22:38:42 +02:00
12 changed files with 852 additions and 155 deletions

.gitignore

@@ -4,3 +4,30 @@ dist/
result
machine-specific
web-app
lnbits
lnbits-extensions
# User-specific deployment configuration
# Copy example-krops.nix to krops.nix and customize
krops.nix
# User-specific build configuration
# Copy example-build-local.nix to build-local.nix and customize
build-local.nix
# Machine-specific configurations (user creates these)
# Keep example-machine as a template
config/machines/*
!config/machines/example-machine/
# Secrets - only ignore unencrypted secrets
# Encrypted .age files are SAFE to commit
secrets/**/!(*.age)
secrets/**/*.txt
secrets/**/*.key
secrets/**/*.pem
secrets/**/*.env
# Age/Passage identity files (NEVER commit these!)
.passage/
identities

DEPLOYMENT-GUIDE.md

@@ -7,30 +7,90 @@ This setup builds the web-app **locally** with machine-specific configuration, t
- Machine-specific `.env` files
- Machine-specific images in the `public` folder
## Initial Setup
When you first clone this repository, you need to set up your local configuration:
### 1. Create your configuration files
```bash
# Copy the example templates
cp example-krops.nix krops.nix
cp example-build-local.nix build-local.nix
```
### 2. Create your first machine configuration
```bash
# Copy the example machine template
cp -r config/machines/example-machine config/machines/my-machine
# Edit the configuration
# - Change the domain in configuration.nix
# - Add your hardware-configuration.nix (from nixos-generate-config)
```
### 3. Create machine-specific web-app assets (if deploying web-app)
```bash
mkdir -p machine-specific/my-machine/env
mkdir -p machine-specific/my-machine/images
# Add your .env file and images
# See machine-specific/example-machine/ for reference
```
### 4. Update krops.nix and build-local.nix
**In `krops.nix`:**
- Replace `example-machine` with your machine name
- Update the SSH target (`root@your-host`)
- Add to the `inherit` list and `all` script
**In `build-local.nix`:**
- Replace `example-machine` with your machine name
- Add to the `all` build script
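For example, the edits from this step might look like the following sketch (assuming your machine is called `my-machine`; the surrounding `source`, `buildForMachine`, and `all` definitions come from the example templates):
```nix
# krops.nix (sketch) — add a deploy target for your machine and export it
my-machine = pkgs.krops.writeDeploy "deploy-my-machine" {
  source = source "my-machine";
  target = "root@your-host-or-ip";
  force = true;
};
# ...and at the bottom of the file:
# inherit example-machine my-machine all;

# build-local.nix (sketch) — add a build entry and call it from the "all" script
my-machine = buildForMachine "my-machine";
```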
### 5. Build and deploy!
```bash
# Build web-app locally (if using web-app)
nix-build ./build-local.nix -A my-machine && ./result/bin/build-my-machine
# Deploy to target machine
nix-build ./krops.nix -A my-machine && ./result
```
**Note:** Your `krops.nix`, `build-local.nix`, and machine configs in `config/machines/*` are gitignored. You can safely pull updates without overwriting your local configuration.
## Structure
```
.
├── config/                          # NixOS configuration files
│   ├── shared.nix                   # Shared config for all machines
│   ├── nginx.nix                    # Nginx configuration
│   ├── lnbits.nix                   # LNBits configuration
│   ├── pict-rs.nix                  # Pict-rs configuration
│   └── machines/                    # Machine-specific configs (gitignored)
│       ├── example-machine/         # Template (committed to git)
│       │   ├── configuration.nix    # Main config entry point
│       │   ├── boot.nix             # Bootloader settings
│       │   └── example-service.nix  # Service examples
│       ├── machine1/                # Your machines (gitignored)
│       └── machine2/                # Your machines (gitignored)
├── web-app/                         # Shared web-app source (symlink)
├── machine-specific/                # Machine-specific web-app assets (symlink)
├── lnbits/                          # LNBits source (symlink)
├── secrets/                         # Encrypted secrets
│   ├── example-machine/
│   │   └── README.md                # Secrets usage guide
│   ├── machine1/                    # Machine-specific secrets
│   │   └── *.age                    # Encrypted with age
│   └── machine2/
├── build/                           # Generated locally (gitignored)
├── build-local.nix                  # Local build scripts
└── krops.nix                        # Deployment configuration
```
## How It Works
@@ -83,10 +143,31 @@ nix-build ./krops.nix -A all && ./result
### Add a new machine
1. **Copy the example template:**
```bash
cp -r config/machines/example-machine config/machines/my-new-machine
```
2. **Edit the configuration:**
- Open `config/machines/my-new-machine/configuration.nix`
- Change `domain = "example.com"` to your domain
- Add your `hardware-configuration.nix` (from `nixos-generate-config`)
3. **Create machine-specific web-app assets** (if using web-app):
```bash
mkdir -p machine-specific/my-new-machine/env
mkdir -p machine-specific/my-new-machine/images
# Add .env file and images
```
4. **Add to krops.nix and build-local.nix:**
- Add `my-new-machine` configuration to both files
5. **Build and deploy:**
```bash
nix-build ./build-local.nix -A my-new-machine && ./result/bin/build-my-new-machine
nix-build ./krops.nix -A my-new-machine && ./result
```
### Update environment variables
@@ -102,3 +183,161 @@ Edit files in `web-app/`, then rebuild locally
After any changes: rebuild locally, then redeploy.
## Adding Machine-Specific Services
Sometimes you need services that only run on certain machines (e.g., WireGuard on machine1 but not machine2).
### Using the Example Template
A complete example machine configuration is provided in `config/machines/example-machine/`:
```
config/machines/example-machine/
├── configuration.nix # Template with domain parameter
├── boot.nix # Bootloader configuration examples
└── example-service.nix # WireGuard and other service examples
```
**To use the template:**
1. Copy the `example-machine` directory to your new machine name:
```bash
cp -r config/machines/example-machine config/machines/my-new-machine
```
2. Edit `configuration.nix` to set your domain
3. Copy your `hardware-configuration.nix` from `nixos-generate-config`
4. Customize `boot.nix` for your bootloader (UEFI or BIOS)
5. Modify or remove `example-service.nix` as needed
6. Add the machine to `build-local.nix` and `krops.nix`
### Example: Machine1 has WireGuard
**Structure:**
```
config/
├── shared.nix                       # Shared config for all machines
└── machines/
    ├── machine1/
    │   ├── configuration.nix        # Imports shared.nix + machine-specific modules
    │   ├── wireguard.nix            # Machine1-specific service
    │   ├── hardware-configuration.nix
    │   └── boot.nix
    └── machine2/
        ├── configuration.nix        # Only imports shared.nix
        ├── hardware-configuration.nix
        └── boot.nix
```
### Steps to Add a Machine-Specific Service
1. **Create a service configuration file** in the machine's directory:
```nix
# Example: config/machines/machine1/wireguard.nix
{ config, lib, pkgs, ... }:
{
  networking.wireguard.interfaces = {
    wg0 = {
      privateKeyFile = "/etc/wireguard/privatekey";
      ips = [ "10.0.0.2/24" ];
      peers = [ ... ];
    };
  };
}
```
2. **Import it in the machine's configuration.nix**:
```nix
# config/machines/machine1/configuration.nix
{ config, pkgs, ... }:
{
  imports = [
    (import /var/src/config-shared {
      inherit config pkgs;
      domain = "4lpaca.io";
    })
    ./hardware-configuration.nix
    ./boot.nix
    ./wireguard.nix  # ← Add your service here
  ];
}
```
3. **Deploy** - the service will only be deployed to that specific machine:
```bash
nix-build ./krops.nix -A machine1 && ./result
```
### Common Machine-Specific Services
- **WireGuard VPN** - Only on machines that need VPN access
- **Backup services** - Different backup targets per machine
- **Development tools** - Extra packages for staging vs production
- **Custom hardware drivers** - GPU drivers, specific hardware support
The key is that each machine's `configuration.nix` can import different modules while still sharing common configuration through `shared.nix`.
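For contrast, a machine without any extra services might use a minimal configuration like this sketch (a hypothetical `machine2`, modelled on the `example-machine` template):
```nix
# config/machines/machine2/configuration.nix — shared config only, no extra services
{ config, pkgs, ... }:
let
  domain = "machine2.example.com";  # hypothetical domain
in
{
  imports = [
    { _module.args = { inherit domain; }; }
    (import /var/src/config-shared { inherit config pkgs domain; })
    ./hardware-configuration.nix
    ./boot.nix
    # no wireguard.nix or other machine-specific modules
  ];
}
```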
## Deploying LNBits Extensions
You can deploy custom LNBits extensions to `/var/lib/lnbits/extensions` on your target machines.
### Setup
**1. Create extensions directory:**
```bash
mkdir -p lnbits-extensions
```
**2. Add your custom extensions:**
```bash
# Example: Clone a custom extension
git clone https://github.com/your-org/custom-extension lnbits-extensions/custom-extension
```
**3. Enable in krops.nix:**
Uncomment the lnbits-extensions line:
```nix
lnbits-extensions.file = toString ./lnbits-extensions;
```
**4. Enable in config/lnbits.nix:**
Choose one of two options:
**Option 1: Replace extensions directory** (use if you manage ALL extensions via deployment)
```nix
systemd.tmpfiles.rules = [
  "L+ /var/lib/lnbits/extensions - - - - /var/src/lnbits-extensions"
];
```
⚠️ **Warning:** This will DELETE any extensions installed via the LNBits UI!
**Option 2: Merge deployed extensions** (safer - keeps UI-installed extensions)
```nix
systemd.services.lnbits-copy-extensions = {
  description = "Copy deployed LNBits extensions";
  before = [ "lnbits.service" ];
  wantedBy = [ "lnbits.service" ];
  serviceConfig = {
    Type = "oneshot";
    ExecStart = "${pkgs.rsync}/bin/rsync -av /var/src/lnbits-extensions/ /var/lib/lnbits/extensions/";
  };
};
```
**5. Deploy:**
```bash
nix-build ./krops.nix -A machine1 && ./result
```
### How It Works
**Option 1 (Symlink):**
- Your `./lnbits-extensions` directory is deployed to `/var/src/lnbits-extensions`
- A symlink replaces `/var/lib/lnbits/extensions` → `/var/src/lnbits-extensions`
- Any existing extensions directory is deleted
- All extensions must be managed via deployment
**Option 2 (Copy/Merge):**
- Your `./lnbits-extensions` directory is deployed to `/var/src/lnbits-extensions`
- Deployed extensions are copied into `/var/lib/lnbits/extensions/`
- Existing UI-installed extensions are preserved
- You can mix deployed extensions with UI-installed ones

config/lnbits.nix

@@ -1,6 +1,14 @@
{ domain, pkgs, config, lib, ... }:
let
lnbitsFlake = builtins.getFlake "path:/var/src/lnbits-src";
in
{
# Import the LNBits service module from the flake (following official guide pattern)
imports = [
"${lnbitsFlake}/nix/modules/lnbits-service.nix"
];
# LNBits service configuration
services.lnbits = {
enable = true;
@@ -8,9 +16,12 @@
port = 5000;
openFirewall = true;
stateDir = "/var/lib/lnbits";
# Use lnbits package from the flake
package = lnbitsFlake.packages.${pkgs.system}.lnbits;
env = {
# Custom extensions path (if deployed via krops)
# Extensions from /var/src/lnbits-extensions will be symlinked to /var/lib/lnbits/extensions
# LNBITS_EXTENSIONS_PATH = "/var/lib/lnbits/extensions";
LNBITS_ADMIN_UI = "true";
AUTH_ALLOWED_METHODS = "user-id-only, username-password";
LNBITS_BACKEND_WALLET_CLASS = "FakeWallet";
@@ -81,4 +92,31 @@
};
};
};
# Deploy custom extensions
# WARNING: L+ will REPLACE /var/lib/lnbits/extensions if it already exists!
# This will DELETE any extensions installed via the LNBits UI.
#
# Option 1: Replace extensions directory entirely (use with caution)
systemd.tmpfiles.rules = [
# Set permissions on source directory so lnbits user can read it
"d /var/src/lnbits-extensions 0755 lnbits lnbits - -"
# Create symlink with proper ownership
"L+ /var/lib/lnbits/extensions - lnbits lnbits - /var/src/lnbits-extensions"
];
#
# Option 2: Manually merge deployed extensions with existing ones
# Copy deployed extensions into the extensions directory without replacing it:
# systemd.tmpfiles.rules = [
# "d /var/src/lnbits-extensions 0755 lnbits lnbits - -"
# ];
# systemd.services.lnbits-copy-extensions = {
# description = "Copy deployed LNBits extensions";
# before = [ "lnbits.service" ];
# wantedBy = [ "lnbits.service" ];
# serviceConfig = {
# Type = "oneshot";
# ExecStart = "${pkgs.rsync}/bin/rsync -av /var/src/lnbits-extensions/ /var/lib/lnbits/extensions/";
# };
# };
}

config/machines/example-machine/boot.nix

@@ -0,0 +1,13 @@
{
# Bootloader configuration
# This example uses systemd-boot for UEFI systems
# For BIOS systems, use GRUB instead
# UEFI boot loader (systemd-boot)
boot.loader.systemd-boot.enable = true;
boot.loader.efi.canTouchEfiVariables = true;
# Alternative: GRUB for BIOS systems
# boot.loader.grub.enable = true;
# boot.loader.grub.device = "/dev/sda"; # or "nodev" for UEFI
}

config/machines/example-machine/configuration.nix

@@ -0,0 +1,23 @@
{ config, pkgs, ... }:
let
domain = "example.com";
in
{
imports = [
{ _module.args = { inherit domain; }; }
(import /var/src/config-shared {
inherit config pkgs domain;
})
# Import hardware-specific configuration
# This file is typically generated by nixos-generate-config
./hardware-configuration.nix
# Import boot configuration (bootloader settings)
./boot.nix
# Import any machine-specific services
# Comment out or remove if not needed
# ./example-service.nix
];
}

config/machines/example-machine/example-service.nix

@@ -0,0 +1,71 @@
{ config, lib, pkgs, domain, ... }:
{
# Example: WireGuard VPN Service
# This is a machine-specific service that can be imported in configuration.nix
# Only machines that need WireGuard should import this file
# Install WireGuard tools
environment.systemPackages = with pkgs; [
wireguard-tools
];
# Configure WireGuard interface
networking.wireguard.interfaces = {
wg0 = {
# Generate keys with: wg genkey | tee privatekey | wg pubkey > publickey
# Store the private key securely on the target machine
privateKeyFile = "/etc/wireguard/privatekey";
# VPN IP address for this machine
ips = [ "10.0.0.2/24" ];
# VPN peers (other machines or VPN server)
peers = [
{
# Public key of the peer
publicKey = "PEER_PUBLIC_KEY_HERE";
# Which IPs should be routed through this peer
allowedIPs = [ "10.0.0.1/32" ];
# Endpoint address and port of the peer
endpoint = "vpn.example.com:51820";
# Send keepalive packets every 15 seconds
persistentKeepalive = 15;
}
];
};
};
# Optional: Systemd service optimizations
systemd.services."wireguard-wg0".serviceConfig = {
# Restart the service if it fails
Restart = "on-failure";
RestartSec = "5s";
};
# Other example services you might add:
# Example: Custom backup service
# services.restic.backups.daily = {
# user = "root";
# repository = "s3:s3.amazonaws.com/my-backup-bucket";
# passwordFile = "/etc/restic/password";
# paths = [ "/var/lib" "/home" ];
# timerConfig = { OnCalendar = "daily"; };
# };
# Example: Development tools (for staging environments)
# environment.systemPackages = with pkgs; [
# vim
# git
# htop
# tmux
# ];
# Example: Custom firewall rules
# networking.firewall.allowedTCPPorts = [ 8080 ];
# networking.firewall.allowedUDPPorts = [ 51820 ];
}

config/modules/.gitkeep (new, empty)

config/modules/lnbits-service.nix (deleted)

@@ -1,123 +0,0 @@
{ config, pkgs, lib, ... }:
let
defaultUser = "lnbits";
cfg = config.services.lnbits;
inherit (lib) mkOption mkIf types optionalAttrs literalExpression;
in
{
options = {
services.lnbits = {
enable = mkOption {
default = false;
type = types.bool;
description = ''
Whether to enable the lnbits service
'';
};
openFirewall = mkOption {
type = types.bool;
default = false;
description = ''
Whether to open the ports used by lnbits in the firewall for the server
'';
};
package = mkOption {
type = types.package;
defaultText = literalExpression "pkgs.lnbits";
default = pkgs.lnbits;
description = ''
The lnbits package to use.
'';
};
stateDir = mkOption {
type = types.path;
default = "/var/lib/lnbits";
description = ''
The lnbits state directory
'';
};
host = mkOption {
type = types.str;
default = "127.0.0.1";
description = ''
The host to bind to
'';
};
port = mkOption {
type = types.port;
default = 8231;
description = ''
The port to run on
'';
};
user = mkOption {
type = types.str;
default = "lnbits";
description = "user to run lnbits as";
};
group = mkOption {
type = types.str;
default = "lnbits";
description = "group to run lnbits as";
};
env = mkOption {
type = types.attrsOf types.str;
default = {};
description = ''
Additional environment variables that are passed to lnbits.
Reference Variables: https://github.com/lnbits/lnbits/blob/dev/.env.example
'';
example = {
LNBITS_ADMIN_UI = "true";
};
};
};
};
config = mkIf cfg.enable {
users.users = optionalAttrs (cfg.user == defaultUser) {
${defaultUser} = {
isSystemUser = true;
group = defaultUser;
};
};
users.groups = optionalAttrs (cfg.group == defaultUser) {
${defaultUser} = { };
};
systemd.tmpfiles.rules = [
"d ${cfg.stateDir} 0700 ${cfg.user} ${cfg.group} - -"
"d ${cfg.stateDir}/data 0700 ${cfg.user} ${cfg.group} - -"
];
systemd.services.lnbits = {
enable = true;
description = "lnbits";
wantedBy = [ "multi-user.target" ];
after = [ "network-online.target" ];
environment = lib.mkMerge [
{
LNBITS_DATA_FOLDER = "${cfg.stateDir}/data";
# LNBits automatically appends '/extensions' to this path
LNBITS_EXTENSIONS_PATH = "${cfg.stateDir}";
}
cfg.env
];
serviceConfig = {
User = cfg.user;
Group = cfg.group;
WorkingDirectory = "${cfg.package}/lib/python3.12/site-packages";
StateDirectory = "lnbits";
ExecStart = "${cfg.package}/bin/lnbits --port ${toString cfg.port} --host ${cfg.host}";
Restart = "always";
PrivateTmp = true;
};
};
networking.firewall = mkIf cfg.openFirewall {
allowedTCPPorts = [ cfg.port ];
};
};
}

config/shared.nix

@@ -2,16 +2,19 @@
{
imports = [
# Note: 'domain' is made available via _module.args in the machine's configuration.nix
# It's passed to this module and propagated to all imports automatically
/var/src/config-nginx
/var/src/config-pict-rs
/var/src/config-lnbits
];
# Set hostname (replace dots with hyphens, e.g., "demo.ariege.io" → "demo-ariege-io")
networking.hostName = builtins.replaceStrings ["."] ["-"] domain;
nix.settings.experimental-features = [ "nix-command" "flakes" ];
# System packages
environment.systemPackages = with pkgs; [
vim

New documentation file (How the LNBits Flake Works)

@@ -0,0 +1,264 @@
# How the LNBits Flake Works
## Overview
This document explains how the LNBits flake deployment works, particularly how it achieves the equivalent of running `uv run lnbits` on the deployed NixOS machine.
## Architecture
The LNBits flake uses `uv2nix` to convert `uv`'s lock file into a reproducible Nix build, creating a Python virtual environment that can be deployed as a NixOS service.
## Key Components
### 1. uv2nix: Converting uv.lock to Nix
The flake uses `uv2nix` to read the `uv.lock` file and create a reproducible Nix build:
```nix
# Read uv.lock and pyproject.toml
workspace = uv2nix.lib.workspace.loadWorkspace { workspaceRoot = ./.; };
# Create overlay preferring wheels (faster than building from source)
uvLockedOverlay = workspace.mkPyprojectOverlay { sourcePreference = "wheel"; };
```
This converts the uv-managed dependencies into Nix packages.
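The `pythonSet` used in the next step is built by applying this overlay to a pyproject.nix base package set. The exact code lives in the LNBits `flake.nix`; a typical uv2nix construction looks roughly like the sketch below (the `pyproject-nix` and `pyproject-build-systems` inputs and the interpreter version are assumptions, not copied from the flake):
```nix
# Sketch of a typical uv2nix package-set construction (not verbatim from flake.nix)
pythonSet =
  (pkgs.callPackage pyproject-nix.build.packages {
    python = pkgs.python312;                     # interpreter assumed here
  }).overrideScope (pkgs.lib.composeManyExtensions [
    pyproject-build-systems.overlays.default     # build backends (hatchling, setuptools, ...)
    uvLockedOverlay                              # the packages locked in uv.lock (from above)
  ]);
```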
### 2. Building a Python Virtual Environment
Instead of `uv` creating a venv at runtime, Nix creates one during the build:
```nix
# Build venv with all dependencies from uv.lock
runtimeVenv = pythonSet.mkVirtualEnv "${projectName}-env" workspace.deps.default;
```
This creates an immutable virtual environment in `/nix/store/...-lnbits-env` with all Python packages installed from the locked dependencies.
### 3. The Wrapper Script (Equivalent to `uv run`)
The flake creates a wrapper that mimics `uv run lnbits`:
```nix
lnbitsApp = pkgs.writeShellApplication {
  name = "lnbits";
  text = ''
    export SSL_CERT_FILE=${pkgs.cacert}/etc/ssl/certs/ca-bundle.crt
    export REQUESTS_CA_BUNDLE=$SSL_CERT_FILE
    export PYTHONPATH="$PWD:${PYTHONPATH:-}"
    exec ${runtimeVenv}/bin/lnbits "$@"
  '';
};
```
This wrapper:
- Sets up SSL certificates for HTTPS requests
- Adds the current directory to `PYTHONPATH` (so it can find local source files)
- Executes the `lnbits` binary from the built venv
### 4. The NixOS Service Module
The service module (`lnbits-service.nix`) configures systemd to run LNBits:
```nix
# The actual command that runs
ExecStart = "${lib.getExe cfg.package} --port ${toString cfg.port} --host ${cfg.host}";
# Environment variables
environment = {
  LNBITS_DATA_FOLDER = "${cfg.stateDir}";                  # /var/lib/lnbits
  LNBITS_EXTENSIONS_PATH = "${cfg.stateDir}/extensions";
  LNBITS_PATH = "${cfg.package.src}";                      # Points to source
}
```
Key points:
- `cfg.package` = the venv from the flake (`runtimeVenv`)
- `lib.getExe cfg.package` = extracts the executable path: `/nix/store/xxx-lnbits-env/bin/lnbits`
- `cfg.package.src` = points back to the LNBits source directory for templates/static files
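As a rough illustration, `lib.getExe` resolves the package's main program (a sketch, assuming the venv package sets `meta.mainProgram = "lnbits"`):
```nix
# Illustrative: lib.getExe cfg.package expands to roughly
"${cfg.package}/bin/${cfg.package.meta.mainProgram}"
# i.e. something like /nix/store/xxx-lnbits-env/bin/lnbits
```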
### 5. Flake Outputs
The flake exposes the venv as a package:
```nix
packages.default = runtimeVenv;
packages.${projectName} = runtimeVenv; # packages.lnbits = runtimeVenv
```
## How Your Deployment Uses It
In your `config/lnbits.nix`:
```nix
package = (builtins.getFlake "path:/var/src/lnbits-src").packages.${pkgs.system}.lnbits;
```
This breaks down as:
1. `builtins.getFlake "path:/var/src/lnbits-src"` - Loads the flake from the deployed source
2. `.packages` - Accesses the packages output from the flake
3. `.${pkgs.system}` - Selects the right system architecture (e.g., `x86_64-linux`)
4. `.lnbits` - Gets the `lnbits` package (which equals `runtimeVenv`)
## Understanding Flake References
The **flake reference format** is crucial to understanding how this works:
### Local Path Reference
```nix
builtins.getFlake "path:/var/src/lnbits-src"
```
- Uses files from the local filesystem at `/var/src/lnbits-src`
- The `.src` attribute points to `/var/src/lnbits-src`
- Files are mutable - you can edit them
- Requires deploying the full source tree via krops
### GitHub Reference
```nix
builtins.getFlake "github:lnbits/lnbits/main"
```
- Nix fetches the repository from GitHub
- Stores it in `/nix/store/xxx-source/` (read-only)
- The `.src` attribute points to `/nix/store/xxx-source`
- Files are immutable
- No need to deploy source separately
### Comparison
| Aspect | `path:/var/src/lnbits-src` | `github:lnbits/lnbits` |
|--------|---------------------------|------------------------|
| **Source location** | `/var/src/lnbits-src` | `/nix/store/xxx-source` |
| **Mutable?** | Yes - can edit files | No - read-only |
| **Deployment** | Deploy via krops | Built-in to Nix |
| **Updates** | Redeploy source | Change flake ref |
| **Local changes** | Supported | Not possible |
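For instance, if you did not need a mutable source tree, the package line in `config/lnbits.nix` could point at a GitHub flake reference instead (a sketch — with pure evaluation, `builtins.getFlake` generally needs a pinned, locked reference rather than a branch name):
```nix
# Hypothetical alternative to the path: reference used by this deployment
package = (builtins.getFlake "github:lnbits/lnbits/main").packages.${pkgs.system}.lnbits;
```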
## Why Deploy the Full Source?
The entire `lnbits` folder must be copied to `/var/src/lnbits-src` because:
### 1. Build Time Requirements
The flake needs these files to build the venv:
- `flake.nix` - Defines how to build the venv
- `uv.lock` - Contains locked dependency versions
- `pyproject.toml` - Defines project metadata
### 2. Runtime Requirements
LNBits needs the source tree at runtime for:
- Python modules in `lnbits/`
- HTML templates
- Static files (CSS, JavaScript, images)
- Extension loading system
## Directory Structure
### On the Deployed Machine
```
/var/src/lnbits-src/ ← Full source deployed by krops
├── flake.nix ← Used to build venv
├── uv.lock ← Used to build venv
├── pyproject.toml ← Used to build venv
└── lnbits/ ← Used at runtime
├── templates/
├── static/
└── ...
/nix/store/xxx-lnbits-env/ ← Built venv (Python packages only)
├── bin/lnbits ← Executable
└── lib/python3.12/... ← Dependencies
```
### At Runtime
The systemd service:
- Runs: `/nix/store/xxx-lnbits-env/bin/lnbits`
- With: `LNBITS_PATH=/var/src/lnbits-src` (to find templates/static/etc)
- With: `WorkingDirectory=/var/src/lnbits-src`
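Put together, the effective service definition looks roughly like this sketch (values are illustrative; the real ones come from the `services.lnbits.*` options and the built venv's store path):
```nix
# Illustrative only — assembled from the settings described above
systemd.services.lnbits = {
  environment = {
    LNBITS_DATA_FOLDER = "/var/lib/lnbits";
    LNBITS_EXTENSIONS_PATH = "/var/lib/lnbits/extensions";
    LNBITS_PATH = "/var/src/lnbits-src";          # templates, static files, extensions
  };
  serviceConfig = {
    WorkingDirectory = "/var/src/lnbits-src";
    ExecStart = "/nix/store/xxx-lnbits-env/bin/lnbits --port 5000 --host 127.0.0.1";  # port/host from services.lnbits
  };
};
```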
## Comparison: `uv run` vs Nix Flake
### Traditional `uv run lnbits`
```bash
cd /path/to/lnbits
uv run lnbits --port 5000 --host 0.0.0.0
```
This:
1. Reads `uv.lock`
2. Creates/updates a venv in `.venv/`
3. Installs dependencies if needed
4. Runs `lnbits` from the venv
5. Uses current directory for source files
### Nix Flake Approach
```nix
package = (builtins.getFlake "path:/var/src/lnbits-src").packages.${pkgs.system}.lnbits;
```
This:
1. ✅ Reads `uv.lock` via `uv2nix`
2. ✅ Creates a venv in `/nix/store` (immutable)
3. ✅ All dependencies are locked and reproducible
4. ✅ Runs `/nix/store/xxx-lnbits-env/bin/lnbits`
5. ✅ Sets `LNBITS_PATH` to source directory for templates/static/extensions
6. ✅ Runs as a systemd service with proper user/permissions
7. ✅ No runtime dependency on `uv` itself
### Key Differences
| Aspect | `uv run` | Nix Flake |
|--------|----------|-----------|
| **Venv location** | `.venv/` in source | `/nix/store/xxx-env` |
| **Mutability** | Mutable | Immutable |
| **Reproducibility** | Lock file only | Full Nix derivation |
| **Service management** | Manual | systemd integration |
| **Dependency on uv** | Required at runtime | Only at build time |
## The `.src` Attribute Mystery
A common question: where is `cfg.package.src` defined?
### Answer: It's Automatic
The `.src` attribute is **not explicitly defined** - it's automatically set by Nix when loading a flake:
```nix
# When you do this:
builtins.getFlake "path:/var/src/lnbits-src"
# Nix automatically:
# 1. Reads the flake at /var/src/lnbits-src
# 2. Evaluates it and builds outputs
# 3. Adds .src = /var/src/lnbits-src to the package
```
This is a built-in Nix flake feature - packages inherit the source location from where the flake was loaded.
## Summary
The LNBits flake deployment:
1. **Converts uv dependencies to Nix** using `uv2nix`
2. **Builds an immutable venv** in `/nix/store`
3. **Deploys full source** to `/var/src/lnbits-src` via krops
4. **Loads the flake** from the deployed source using `path:/var/src/lnbits-src`
5. **Runs as a systemd service** with proper environment variables pointing to the source
This provides:
- ✅ **Reproducibility** - exact same dependencies every time
- ✅ **Declarative configuration** - everything in `configuration.nix`
- ✅ **Source mutability** - can edit files in `/var/src/lnbits-src`
- ✅ **No uv dependency** - service doesn't need `uv` at runtime
- ✅ **Proper service management** - systemd integration with user permissions
The key insight is that **`path:` vs `github:` in the flake reference** determines whether you use local deployed files or Nix fetches from a remote repository.

example-build-local.nix

@@ -0,0 +1,59 @@
let
pkgs = import <nixpkgs> {};
# Build script for a specific machine
buildForMachine = name: pkgs.writeShellScriptBin "build-${name}" ''
set -e
BUILD_DIR="./build/${name}"
echo "Building web-app for ${name}..."
# Clean and create build directory
rm -rf "$BUILD_DIR"
mkdir -p "$BUILD_DIR"
# Copy web-app source
cp -r ./web-app/* "$BUILD_DIR/"
# Copy machine-specific .env
echo "Copying machine-specific .env..."
cp ./machine-specific/${name}/env/.env "$BUILD_DIR/.env"
# Copy machine-specific images to public folder
echo "Copying machine-specific images to public..."
cp -r ./machine-specific/${name}/images/* "$BUILD_DIR/public/"
# Copy machine-specific logo to assets
echo "Copying machine-specific logo to assets..."
mkdir -p "$BUILD_DIR/src/assets"
cp ./machine-specific/${name}/images/logo.png "$BUILD_DIR/src/assets/logo.png"
# Build the web-app
echo "Running build..."
cd "$BUILD_DIR"
${pkgs.nodejs}/bin/npm run build
echo "Build complete for ${name}! Output in $BUILD_DIR/dist"
'';
in {
# Example machine build (copy this line for each machine)
# Replace "example-machine" with your machine name
example-machine = buildForMachine "example-machine";
# Add more machines here:
# machine1 = buildForMachine "machine1";
# machine2 = buildForMachine "machine2";
# Build all machines (update this list with your machines)
all = pkgs.writeShellScriptBin "build-all" ''
set -e
echo "Building for all machines..."
${(buildForMachine "example-machine")}/bin/build-example-machine
# Add your machines here:
# ${(buildForMachine "machine1")}/bin/build-machine1
# ${(buildForMachine "machine2")}/bin/build-machine2
echo "All builds complete!"
'';
}

example-krops.nix

@@ -0,0 +1,83 @@
let
krops = builtins.fetchGit {
url = "https://cgit.krebsco.de/krops/";
ref = "master";
};
lib = import "${krops}/lib";
pkgs = import "${krops}/pkgs" {};
# Define sources for each machine
source = name: lib.evalSource [
{
# NixOS configuration entry point
nixos-config.symlink = "config-machine/configuration.nix";
# Use nixpkgs from local NIX_PATH (much smaller than git clone)
# This copies your local <nixpkgs> without .git history (~400MB vs 6GB)
nixpkgs.file = {
path = toString <nixpkgs>;
useChecksum = true;
};
# Shared configuration files (only shared modules and files)
config-shared.file = toString ./config/shared.nix;
config-modules.file = toString ./config/modules;
config-nginx.file = toString ./config/nginx.nix;
config-pict-rs.file = toString ./config/pict-rs.nix;
config-lnbits.file = toString ./config/lnbits.nix;
# Machine-specific configuration files (only this machine's config)
config-machine.file = toString (./config/machines + "/${name}");
# Pre-built web-app (built locally with machine-specific config)
web-app-dist.file = toString (./build + "/${name}/dist");
# LNBits flake source
lnbits-src.file = toString ./lnbits;
# LNBits extensions (deployed to /var/lib/lnbits/extensions)
# Uncomment if you have custom extensions to deploy
# lnbits-extensions.file = toString ./lnbits-extensions;
}
];
# Example machine deployment (copy this block for each machine)
# Replace "example-machine" with your machine name
# Replace "root@your-host" with your SSH target
example-machine = pkgs.krops.writeDeploy "deploy-example-machine" {
source = source "example-machine";
target = "root@your-host-or-ip";
# Avoid having to create a sentinel file.
# Otherwise /var/src/.populate must be created on the target node to signal krops
# that it is allowed to deploy.
force = true;
};
# Add more machines here following the same pattern:
# machine1 = pkgs.krops.writeDeploy "deploy-machine1" {
# source = source "machine1";
# target = "root@machine1-host";
# force = true;
# };
# Deploy to all machines (update this list with your machines)
all = pkgs.writeScript "deploy-all" ''
#!${pkgs.bash}/bin/bash
set -e
echo "Deploying to example-machine..."
${example-machine}
# Add your machines here:
# echo "Deploying to machine1..."
# ${machine1}
echo "All deployments completed!"
'';
in {
# Export your machine deployments here
inherit example-machine all;
# Add your machines to the inherit list:
# inherit example-machine machine1 machine2 all;
}