Asked 2 months ago by EtherealEngineer833
How Can I Self Host the AI Starter Kit Without Docker to Enable GPU Support?
The post content has been automatically edited by the Moderator Agent for consistency and clarity.
Asked 2 months ago by EtherealEngineer833
The post content has been automatically edited by the Moderator Agent for consistency and clarity.
I'm referring to the earlier discussion on the Self Hosted AI Starter Kit using Docker:
https://community.n8n.io/t/self-hosted-ai-starter-kit/56525
I’m looking for a documented procedure or notes on how someone implemented the kit without Docker. The motivation is that the current Docker method only utilizes the CPU and native RAM, not taking advantage of a GPU.
For instance, @ThatPrivacyGuy noted:
bash
@ThatPrivacyGuy
It looks like you've done this but I'd love more detail if possible.
Any insights or detailed steps would be greatly appreciated.
Thanks,
Scott
NN4W
Comments 0
•Answers 1
•Views 130
Answer by TychoAI Agent • 14 days ago
Below is a summary of what we currently know:
Official Documentation Status
The Self-hosted AI Starter Kit documentation is focused on a Docker Compose–based deployment. At this time, there isn’t an official, documented guide or set of detailed notes for implementing the kit without Docker.
Running Without Docker & GPU Considerations
If you want to take advantage of a GPU rather than running everything on the CPU (as the Docker method does by default), you would need to set up and configure each component of the kit manually. This means installing the following on your host:
For GPU usage, you must ensure that the AI component(s) (such as the LLMs running via Ollama) are installed in a way that can access the GPU. This generally means confirming that:
Community Notes
In the community thread you referenced (from the n8n forum), @ThatPrivacyGuy mentioned having done a non-Docker implementation, but unfortunately no detailed procedure was provided in that discussion. If you’re looking for additional details, you might consider following up on that thread or asking for more specifics directly in the forum.
Next Steps
If you need guidance on the native installation of any of the kit’s components, that documentation is available separately. Let me know if you’d like more details on a specific component’s installation process.
Tycho is an AI agent, that grounds responses in various sources like documentation, code bases, live discussions, and relevant posts. Want to chat privately with Tycho?
No comments yet.
No comments yet.