OpenDrive is a multi-tenant internal drive built with Elixir, Phoenix LiveView, SQLite metadata, and pluggable blob storage behind OpenDrive.Storage.
The application is organized around workspaces (tenants). Each authenticated user operates inside one active workspace and all drive reads and writes are scoped by tenant_id.
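To make the scoping rule concrete, here is a minimal sketch of what a tenant-scoped read might look like in Ecto. The module, schema, and field names (`OpenDrive.Drive.File`, `deleted_at`, the scope map shape) are assumptions for illustration, not the project's actual code.

```elixir
defmodule OpenDrive.Drive.Queries do
  import Ecto.Query

  # Hypothetical scoped read: every query filters on the active
  # workspace's tenant_id, so one tenant can never see another's rows.
  def list_files(%{tenant_id: tenant_id} = _scope, folder_id) do
    from(f in OpenDrive.Drive.File,
      where: f.tenant_id == ^tenant_id,
      where: f.folder_id == ^folder_id,
      where: is_nil(f.deleted_at)
    )
    |> OpenDrive.Repo.all()
  end
end
```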
Current product surface:
- Email/password authentication plus magic-link login support
- Workspace creation during registration
- Multi-tenant membership model with `owner`, `admin`, and `member` roles
- Tenant switcher for users who belong to multiple workspaces
- Folder tree navigation
- File upload with direct-to-storage flow and backend proxy fallback
- Authenticated single-file download and multi-file ZIP download
- Soft delete, trash listing, restore, and permanent empty-trash cleanup
- Audit log entries for key tenant, membership, and drive actions
Tech stack:
- Elixir `~> 1.15`
- Phoenix `~> 1.8.5`
- Phoenix LiveView `~> 1.1`
- Ecto + SQLite via `ecto_sqlite3`
- Tailwind CSS + esbuild
- S3-compatible storage adapter plus a fake local adapter for development and tests
Main modules:
- `OpenDrive.Accounts`: registration, authentication, session tokens, password/email updates, scope bootstrap
- `OpenDrive.Tenancy`: workspace creation, membership listing, member management, scope resolution
- `OpenDrive.Drive`: folders, files, uploads, renames, downloads, trash, restore, ZIP assembly
- `OpenDrive.Audit`: tenant-scoped audit event persistence
- `OpenDrive.Storage`: blob storage facade with adapter-based implementation
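The storage facade described above can be sketched as a behaviour plus a dispatching module. This is a hypothetical outline, assuming the callback names and the app-config lookup; the real `OpenDrive.Storage` API may differ.

```elixir
# Hypothetical adapter contract; callback names are assumptions.
defmodule OpenDrive.Storage.Adapter do
  @callback put(key :: String.t(), contents :: binary()) :: :ok | {:error, term()}
  @callback get(key :: String.t()) :: {:ok, binary()} | {:error, term()}
  @callback delete(key :: String.t()) :: :ok | {:error, term()}
end

defmodule OpenDrive.Storage do
  # Resolve the adapter at runtime so dev/test can use the fake
  # while production points at the S3-compatible implementation.
  defp adapter do
    :open_drive
    |> Application.get_env(__MODULE__, [])
    |> Keyword.get(:adapter, OpenDrive.Storage.Fake)
  end

  def put(key, contents), do: adapter().put(key, contents)
  def get(key), do: adapter().get(key)
  def delete(key), do: adapter().delete(key)
end
```

Keeping all blob access behind this facade is what lets the S3 and fake adapters swap without touching `OpenDrive.Drive`.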
Relevant web entrypoints:
- `lib/open_drive_web/router.ex`
- `lib/open_drive_web/live/drive_live/index.ex`
- `lib/open_drive_web/live/members_live/index.ex`
- `lib/open_drive_web/live/trash_live/index.ex`
- `lib/open_drive_web/controllers/direct_upload_controller.ex`
- `lib/open_drive_web/controllers/file_download_controller.ex`
The current schema is composed of:
- `users`
- `users_tokens`
- `tenants`
- `memberships`
- `folders`
- `file_objects`
- `files`
- `audit_events`
Important storage and integrity rules:
- Tenant slug uniqueness is scoped by `owner_user_id`, not globally
- Active folders must have unique names within the same tenant + parent folder
- Active files must have unique names within the same tenant + folder
- Soft-deleted folders/files do not block reuse of the same name
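One common way to enforce the active-name rules while letting soft-deleted rows free up their names is a partial unique index. The migration below is a hypothetical sketch: the module, index names, and a `deleted_at` soft-delete column are assumptions, not the project's actual schema.

```elixir
defmodule OpenDrive.Repo.Migrations.AddActiveNameIndexes do
  use Ecto.Migration

  def change do
    # Active folders: unique name per tenant + parent folder;
    # the partial WHERE clause exempts soft-deleted rows.
    create unique_index(:folders, [:tenant_id, :parent_id, :name],
             where: "deleted_at IS NULL",
             name: :folders_active_name_index
           )

    # Active files: unique name per tenant + containing folder.
    create unique_index(:files, [:tenant_id, :folder_id, :name],
             where: "deleted_at IS NULL",
             name: :files_active_name_index
           )
  end
end
```

SQLite supports partial indexes, so the database itself rejects duplicate active names while ignoring anything in the trash.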
Prerequisites:
- Elixir `~> 1.15`
- Erlang/OTP compatible with Phoenix 1.8
- Node.js is not required globally; Tailwind and esbuild are installed through Mix tasks
```
mix setup
mix phx.server
```

Then open http://localhost:4000.
mix setup runs:
- dependency install
- database creation and migrations
- `priv/repo/seeds.exs`
- Tailwind/esbuild installation
- asset build
At the moment, `priv/repo/seeds.exs` is only a placeholder, so the first user and workspace are created through the registration flow.
Other useful commands:

```
mix test
mix precommit
mix ecto.reset
mix assets.build
```

- Anonymous users land on `/`
- Registration creates both the user and the first workspace in a single transaction
- Authenticated users are redirected to `/app`
- Users with more than one membership can switch the active workspace via `/app/switch-tenant`
/app/switch-tenant - Member management is limited to owners and admins
- Adding a member requires that the invited email already exists as a registered OpenDrive user
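The owner/admin restriction could be expressed as a simple role check against the membership record. The module and function names below are hypothetical; only the role atoms come from the source.

```elixir
defmodule OpenDrive.Tenancy.Policy do
  # Only owners and admins may manage members; plain members may not.
  def can_manage_members?(%{role: role}), do: role in [:owner, :admin]
  def can_manage_members?(_), do: false
end
```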
Uploads support two paths:
- Direct upload preparation through `POST /app/uploads`
- Backend proxy upload through `POST /app/uploads/proxy`
Direct uploads are signed with a Phoenix token and finalized through `POST /app/uploads/complete`.
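The token round trip might look like the sketch below, using `Phoenix.Token`. The salt, payload shape, and `max_age` are assumptions; only the sign-then-finalize flow comes from the source.

```elixir
defmodule OpenDrive.UploadTokens do
  @salt "direct upload"

  # Issued during upload preparation: bind the tenant and object key
  # into a signed token the client must return at finalize time.
  def sign(endpoint, tenant_id, key) do
    Phoenix.Token.sign(endpoint, @salt, %{tenant_id: tenant_id, key: key})
  end

  # Checked when the upload is completed; rejects tampered or
  # expired tokens (hypothetical one-hour window).
  def verify(endpoint, token) do
    Phoenix.Token.verify(endpoint, @salt, token, max_age: 3600)
  end
end
```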
Current operational limits from the code:
- Maximum upload size: 2 GB
- Backend proxy fallback threshold constant: 100 MB
- ZIP download limit: 100 files
- ZIP download total size limit: 500 MB
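Limits like these usually live as module attributes so they are computed at compile time. A hypothetical sketch mirroring the values above (module and function names assumed):

```elixir
defmodule OpenDrive.Drive.Limits do
  # Byte values derived from the documented limits.
  @max_upload_bytes 2 * 1024 * 1024 * 1024
  @proxy_threshold_bytes 100 * 1024 * 1024
  @zip_max_files 100
  @zip_max_total_bytes 500 * 1024 * 1024

  def max_upload_bytes, do: @max_upload_bytes
  def proxy_threshold_bytes, do: @proxy_threshold_bytes
  def zip_max_files, do: @zip_max_files
  def zip_max_total_bytes, do: @zip_max_total_bytes
end
```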
Downloads support:
- Single file redirect through a presigned download URL
- ZIP generation for selected files
By default, development and tests use OpenDrive.Storage.Fake.
To enable the S3-compatible adapter at runtime:
```
export OPEN_DRIVE_STORAGE_ADAPTER=s3
export AWS_S3_BUCKET=your-bucket
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_REGION=us-east-1
```

Optional custom endpoint variables:

```
export AWS_S3_HOST=localhost
export AWS_S3_PORT=9000
export AWS_S3_SCHEME=http://
```

You can also place these variables in `.env.local` at the project root. `config/runtime.exs` loads that file automatically outside the test environment without overriding shell-exported variables.
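Inside `config/runtime.exs`, adapter selection driven by `OPEN_DRIVE_STORAGE_ADAPTER` might be as simple as the fragment below. The module names and config key are assumptions based on the defaults described above.

```elixir
import Config

# Hypothetical adapter switch: "s3" opts into the S3-compatible
# adapter; anything else keeps the fake used in dev and test.
adapter =
  case System.get_env("OPEN_DRIVE_STORAGE_ADAPTER") do
    "s3" -> OpenDrive.Storage.S3
    _ -> OpenDrive.Storage.Fake
  end

config :open_drive, OpenDrive.Storage, adapter: adapter
```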
Default database files:
- Development: `open_drive_dev.db`
- Test: `open_drive_test.db`
- Production with `DATABASE_PATH` set: the custom path
- Production fallback without `DATABASE_PATH`: `/tmp/open_drive.db`
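The production fallback above suggests a `config/runtime.exs` fragment along these lines; the config keys are assumptions, only the env variable and fallback path come from the source.

```elixir
import Config

if config_env() == :prod do
  # Use DATABASE_PATH when provided, else fall back to /tmp.
  database_path = System.get_env("DATABASE_PATH") || "/tmp/open_drive.db"
  config :open_drive, OpenDrive.Repo, database: database_path
end
```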
To override the runtime database path:
```
export DATABASE_PATH=/absolute/path/to/open_drive.db
```

The repository now includes a production Dockerfile and a starter Kamal config in `config/deploy.yml`.
Current production assumptions:
- Phoenix runs as a release on port `4000`
- SQLite lives in `/data/open_drive.db` inside the container
- Kamal mounts the host path `/var/lib/open_drive` into `/data`
- Blob storage stays on S3 via `OPEN_DRIVE_STORAGE_ADAPTER=s3`
- Health checks use `GET /up`
Before the first deploy:
```
mix phx.gen.secret
```

Put the generated value plus your AWS credentials in `.kamal/secrets`:

```
SECRET_KEY_BASE=...
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
```

Adjust these placeholders in `config/deploy.yml`:

- `proxy.host`
- `servers.web.hosts`
- `env.clear.PHX_HOST`
- `env.clear.AWS_S3_BUCKET`
- `ssh.user` if your server user is not `ubuntu`
If you want to build remotely through a Docker host over SSH, you can still run:
```
DOCKER_HOST=ssh://your-server-alias kamal setup
DOCKER_HOST=ssh://your-server-alias kamal deploy
```

Typical first-run flow:

```
kamal setup
kamal deploy
```

Project checks are grouped in:

```
mix precommit
```

That alias runs:

- compile with warnings as errors
- `mix deps.unlock --unused`
- format check
- Credo strict mode
- tests
If you want the same gate before each push:
```
git config core.hooksPath .githooks
```

Contribution guidelines:

- Keep tenant-aware behavior scoped through the current workspace context
- Prefer changing the smallest layer that solves the problem
- Preserve blob handling behind `OpenDrive.Storage`
- For root-level UI metadata and global assets, start with `lib/open_drive_web/components/layouts/root.html.heex`
- For front-end work, keep the current Phoenix, Tailwind, and LiveView structure unless there is a concrete reason to refactor
