ssh root@your-server-ip
sudo apt update && sudo apt upgrade -y
sudo apt install python3 python3-pip git -y
python3 -m venv ai-env
source ai-env/bin/activate
curl -fsSL https://ollama.com/install.sh | sh
ollama serve
Your Ollama instance should now be running on your server, listening on port 11434 by default. (On Linux, the install script also registers a systemd service, so Ollama restarts automatically with the server.)
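To confirm the server is actually up, you can query Ollama's /api/tags endpoint, which lists locally available models. A minimal sketch using only Python's standard library (the base URL assumes the default local port):

```python
import json
import urllib.request
import urllib.error

def list_models(base_url="http://127.0.0.1:11434"):
    """Query Ollama's /api/tags endpoint; return model names, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

print(list_models())
```

If this prints None, the server is not reachable yet; an empty list just means no models have been pulled so far.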
ollama pull deepseek-r1:7b
ollama run deepseek-r1:7b "Hello"
The DeepSeek model is now downloaded and ready to answer queries through Ollama's chatbot interface.
pip install ollama
import ollama
response = ollama.chat(model="deepseek-r1:7b", messages=[{"role": "user", "content": "What is Ionblade AI?"}])
print(response["message"]["content"])
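If you prefer not to install the Python client, you can also call Ollama's REST API directly with just the standard library. A sketch of a single chat turn against the /api/chat endpoint (the model tag and prompt are examples; the function returns None if the server is unreachable):

```python
import json
import urllib.request
import urllib.error

def ask(prompt, model="deepseek-r1:7b", base_url="http://127.0.0.1:11434"):
    """Send one chat turn to Ollama's /api/chat endpoint; return the reply text, or None if unreachable."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request a single JSON response instead of a token stream
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.load(resp)["message"]["content"]
    except (urllib.error.URLError, OSError):
        return None

print(ask("What is Ionblade AI?"))
```

This is handy for wiring the chatbot into applications written in any language, since it is just an HTTP POST.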
sudo ufw allow ssh
sudo ufw allow 11434 # Ollama API port (only open this if remote clients need direct API access)
sudo ufw enable
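After enabling the firewall, it can help to confirm the API port actually accepts connections. A small check using only Python's standard library (host and port below are the local defaults; adjust for remote checks):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the default Ollama API port on the local machine.
print(port_open("127.0.0.1", 11434))
```

Run the same check from a remote machine to verify the firewall rule behaves as intended.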
In just a few steps, your AI chatbot is live on a dedicated Ionblade GPU server. You have full control, green-powered infrastructure, and the freedom to train or fine-tune models as needed.
Next steps:
Experiment with larger LLMs
Automate queries with your applications
Monitor GPU usage and performance
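For the last item, nvidia-smi is the usual starting point. A minimal sketch that shells out to it with standard query fields and degrades gracefully when no NVIDIA tooling is present:

```python
import shutil
import subprocess

def gpu_stats():
    """Return GPU utilization and memory readings from nvidia-smi, or None if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None  # NVIDIA drivers/tools not installed on this host
    proc = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True,
        text=True,
    )
    if proc.returncode != 0:
        return None  # tool present but no usable GPU
    return proc.stdout.strip()

print(gpu_stats())
```

For continuous monitoring, you could run this on a timer or simply use watch -n 2 nvidia-smi in a terminal.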