Compare commits

...

5 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Dad | e9ac68082b | Update Readme.md (Included container build steps) | 2025-08-14 03:49:16 +00:00 |
| Dad | b0c77e3613 | Updated Dockerfile | 2025-08-13 23:44:08 -04:00 |
| Dad | 712a98f77c | Initial commit | 2025-08-11 21:35:17 -04:00 |
| Dad | 298b1c2489 | Editing Readme.md file | 2025-08-11 21:31:11 -04:00 |
| | 22cd6bde60 | Initial commit | 2025-08-11 21:26:00 -04:00 |
3 changed files with 108 additions and 0 deletions

Dockerfile (Normal file, 51 lines)

@@ -0,0 +1,51 @@
FROM nvcr.io/nvidia/cuda:12.9.1-cudnn-runtime-ubuntu22.04
# Install Python 3.11 and system dependencies
RUN apt-get update && apt-get install -y \
python3.11 python3-pip git curl libgl1 libglib2.0-0 ffmpeg \
&& curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && \
apt-get install -y nodejs \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3.11 1 && \
update-alternatives --install /usr/bin/pip pip /usr/bin/pip3 1
RUN python -m pip install --upgrade pip
# Set working directory
WORKDIR /app
# Clone Open WebUI
RUN git clone https://github.com/open-webui/open-webui.git .
# Patch env.py for Python 3.11 compatibility
RUN sed -i 's/logging.getLevelNamesMapping()/logging._nameToLevel/' /app/backend/open_webui/env.py
# Set Node.js memory limit for build
ENV NODE_OPTIONS="--max_old_space_size=8192"
# Build frontend
WORKDIR /app/frontend
RUN npm install y-protocols --legacy-peer-deps
RUN npm install --legacy-peer-deps
# Build the frontend with verbose logging
RUN npm run build --verbose || true
# Change ownership of the backend directory
RUN chown -R 1001:1001 /app/backend
# Install backend dependencies
WORKDIR /app/backend
RUN python -m pip install --no-cache-dir -r requirements.txt uvicorn
# Install PyTorch with CUDA 12.x support
RUN pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# Set the DATABASE_URL environment variable (uncomment if needed)
# ENV DATABASE_URL='sqlite:////home/llm/open-webui/database.db'
# RUN python -m peewee_migrate migrate
EXPOSE 3000
CMD ["uvicorn", "open_webui.main:app", "--host", "0.0.0.0", "--port", "3000"]

Readme.md (Normal file, 45 lines)

@@ -0,0 +1,45 @@
**home-llm Docker Compose**
So far, this is just the docker-compose.yml file used to pull and run the containers for ollama and open-webui.
**nvidia-container Installation**
Step 1: Download the NVIDIA Docker Packages
Download the NVIDIA Container Toolkit and its dependencies with the following commands, adjusting the version numbers if needed:
```bash
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/nvidia-docker2_2.16.0-1_amd64.deb
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/nvidia-container-runtime_3.11.0-1_amd64.deb
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/nvidia-container-toolkit_1.12.0-1_amd64.deb
```
Note: The version numbers may change, so you might want to check the NVIDIA website for the latest versions.
Step 2: Install the Downloaded Packages
Run the following command to install the downloaded packages:
```bash
sudo dpkg -i nvidia-container-runtime_3.11.0-1_amd64.deb nvidia-container-toolkit_1.12.0-1_amd64.deb nvidia-docker2_2.16.0-1_amd64.deb
```
If you encounter any dependency issues, you can resolve them by running:
```bash
sudo apt-get install -f
```
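Once the packages install cleanly, it is worth confirming that Docker can actually reach the GPU before building anything. The sketch below assumes the `nvidia-ctk` CLI that ships with the container toolkit, and it reuses the same CUDA image the Dockerfile is based on:

```bash
# Register the NVIDIA runtime with Docker and restart the daemon
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# If the toolkit is working, this prints the host GPU table from inside a container
docker run --rm --gpus all nvcr.io/nvidia/cuda:12.9.1-cudnn-runtime-ubuntu22.04 nvidia-smi
```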
**Build Container**
```bash
docker build --no-cache -t open-webui .
docker compose build && docker compose up -d
```
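After the stack comes up, a quick way to confirm the container is healthy is to tail its logs and hit the exposed port. The service name `open-webui` is an assumption here, since the docker-compose.yml itself is not part of this diff; port 3000 is the one the Dockerfile exposes.

```bash
# Follow the container logs (service name "open-webui" assumed from docker-compose.yml)
docker compose logs -f open-webui

# The UI should answer on the port exposed by the Dockerfile
curl -I http://localhost:3000
```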

example.env (Normal file, 12 lines)

@@ -0,0 +1,12 @@
# Open WebUI environment variables
WEBUI_SECRET_KEY=zZzXE9XxOx2561sICfe2Oscf/3LVr4ZrnGvv+fcTqsZlsdakWYrZCt8z8Uesh9Vf
HOME=/app
OLLAMA_MODELS=/app/.ollama/models
OLLAMA_HOME=/app/.ollama
OLLAMA_API_BASE_URL=http://ollama:11434
HF_HOME=/app/.cache
NODE_OPTIONS=--max_old_space_size=8192
# NVIDIA GPU settings
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_DRIVER_CAPABILITIES=compute,utility
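The WEBUI_SECRET_KEY above is only a sample and should be replaced before real use. One way to generate a fresh value is sketched below; treating `openssl rand` output as the key format is an assumption, since the repo does not say how the sample was produced.

```bash
# Generate a random value to replace the sample WEBUI_SECRET_KEY
openssl rand -base64 48
```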