💡 Tip: A Starcoder2 3B model option is coming soon to support workstations with limited resources.
tlm is your CLI companion that requires nothing but your workstation. It runs the efficient and powerful CodeLLaMa in your local environment to provide the best possible command-line suggestions.
- 💸 No API key (subscription) is required. (ChatGPT, GitHub Copilot, Azure OpenAI, etc.)
- 📡 No internet connection is required.
- 💻 Works on macOS, Linux and Windows.
- 👩🏻‍💻 Automatic shell detection. (Powershell, Bash, Zsh)
- 🚀 One-liner generation and command explanation.
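As an illustration of the shell-detection feature above, here is a minimal sketch of how a tool might guess the active shell on Unix-like systems. This is an assumption for illustration only — tlm's actual detection logic may differ (e.g. it may inspect the parent process instead of `$SHELL`):

```shell
# Guess the user's shell from the SHELL environment variable,
# falling back to bash when the value is unrecognized.
detect_shell() {
  case "$(basename "${SHELL:-/bin/sh}")" in
    zsh)             echo "zsh" ;;
    bash)            echo "bash" ;;
    pwsh|powershell) echo "powershell" ;;
    *)               echo "bash" ;;  # sensible default
  esac
}

detect_shell
```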
Installation can be done in two ways:
- Installation script (recommended)
- Go Install
Ollama is required to download the necessary models. It can be installed as follows on each platform.
- On macOS and Windows;
Download instructions can be followed at the following link: https://ollama.com/download
- On Linux;
curl -fsSL https://ollama.com/install.sh | sh
- Or using official Docker images 🐳;
# CPU Only
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# With Nvidia GPU
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
# With AMD GPU (ROCm image)
docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm
The installation script is the recommended way to install tlm. It detects your platform and architecture, downloads the matching binary, and executes the install command for you.
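For reference, the detection described above typically looks like the following minimal sketch, which maps `uname` output to the Go-style platform names commonly used for release artifacts (the exact names tlm's script uses are an assumption here):

```shell
# Normalize OS name to lowercase (e.g. Linux -> linux, Darwin -> darwin).
os=$(uname -s | tr '[:upper:]' '[:lower:]')

# Map the machine hardware name to Go-style architecture names.
case "$(uname -m)" in
  x86_64)        arch="amd64" ;;
  aarch64|arm64) arch="arm64" ;;
  *)             arch="$(uname -m)" ;;
esac

echo "detected platform: ${os}/${arch}"
```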
On Linux and macOS, download and execute the installation script by using the following command;
curl -fsSL https://raw.githubusercontent.com/yusufcanb/tlm/1.1/install.sh | sudo -E bash
On Windows (Powershell), download and execute the installation script by using the following command;
Invoke-RestMethod -Uri https://raw.githubusercontent.com/yusufcanb/tlm/1.1/install.ps1 | Invoke-Expression
If you have Go 1.21 or higher installed on your system, you can easily use the following command to install tlm;
go install github.com/yusufcanb/tlm@latest
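Note that `go install` places the binary in `$GOPATH/bin` (or `$GOBIN` when set), so make sure that directory is on your `PATH`:

```shell
# Go installs binaries to $GOBIN if set, otherwise $GOPATH/bin,
# which defaults to $HOME/go/bin. Add it to PATH so `tlm` resolves;
# persist this line in your shell profile as needed.
export PATH="$PATH:${GOBIN:-$HOME/go/bin}"
echo "$PATH"
```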
Then, deploy the tlm modelfiles.
📝 Note: If you have Ollama deployed somewhere else, please first run
tlm config
and configure the Ollama host.
tlm deploy
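If Ollama lives on another machine, its address can also be exported up front via the `OLLAMA_HOST` environment variable that the Ollama CLI itself honors; `tlm config` remains the documented way to point tlm at it. The address below is a hypothetical example:

```shell
# Ollama's API listens on http://localhost:11434 by default.
# Point clients at a remote instance (hypothetical host shown):
export OLLAMA_HOST="http://192.168.0.100:11434"
echo "Using Ollama at $OLLAMA_HOST"
```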
Check the installation by using the following command;
tlm help
To uninstall, on Linux and macOS;
rm /usr/local/bin/tlm
On Windows;
Remove-Item -Recurse -Force "C:\Users\$env:USERNAME\AppData\Local\Programs\tlm"