AI Providers

Setting up Claude, OpenAI, Ollama, and AWS Bedrock

NubiferAI supports multiple AI providers. You can switch between them at any time via the GUI header bar or CLI configuration.

Provider Comparison

Provider  Cost  API Key Required  Best For
Ollama    Free  No                Testing, privacy, offline use
Claude    Paid  Yes               Best quality for infrastructure planning
OpenAI    Paid  Yes               Good alternative, wide model selection
Bedrock   Paid  AWS credentials   Teams already on AWS

Ollama (Local) — Free

Run open-source LLMs locally on your machine. No API key, no internet connection required, completely private.

Setup

# 1. Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# 2. Pull a model (choose one)
ollama pull llama3.2         # General purpose, good balance
ollama pull mistral          # Fast, good for simple tasks
ollama pull codellama        # Optimized for code generation
ollama pull llama3.1         # Larger, higher quality

# 3. Verify it's running
ollama list

Ollama starts automatically and serves models at http://localhost:11434.
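If you want to confirm the server is actually up before pointing NubiferAI at it, you can probe Ollama's HTTP API directly — a small sketch, assuming the default port (the /api/tags endpoint lists installed models):

```shell
# Probe the local Ollama server on its default port; /api/tags lists installed models
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "Ollama is reachable on :11434"
else
  echo "Ollama is not running on :11434"
fi
```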

Configuration

GUI: Select "ollama" from the provider dropdown in the header bar. No API key needed.

CLI:

# Set in config
nubiferai config --init  # Select "ollama"

# Or via environment
export NUBIFERAI_PROVIDER=ollama

Config file (~/.config/nubiferai/config.toml):

provider = "ollama"

Available Models

Model      Size     Notes
llama3.2   3B/11B   Good balance of speed and quality
llama3.1   8B/70B   Higher quality, needs more RAM
mistral    7B       Fast, good for simple tasks
codellama  7B/13B   Specialized for code generation

Hardware Requirements

  • Minimum: 8GB RAM, any modern CPU (will be slow)
  • Recommended: 16GB RAM, GPU with 8GB+ VRAM
  • Best: 32GB+ RAM, GPU with 16GB+ VRAM (for larger models)

Tips

  • Start with llama3.2 — it's the best balance for most hardware
  • If responses are slow, try mistral (smaller, faster)
  • For Terraform/code generation specifically, try codellama
  • All data stays on your machine — nothing is sent to the cloud

Claude (Anthropic)

Claude is the highest-quality option for infrastructure planning and cloud operations.

Setup

  1. Create an account at console.anthropic.com
  2. Navigate to API Keys and create a new key
  3. Copy the key (starts with sk-ant-...)

Configuration

GUI: Go to Settings > Providers, paste your key in the "Claude API Key" field.

CLI / Environment:

# Set for current session
export ANTHROPIC_API_KEY="sk-ant-your-key-here"

# Persist in shell profile
echo 'export ANTHROPIC_API_KEY="sk-ant-your-key-here"' >> ~/.bashrc
source ~/.bashrc
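After exporting the key, you can sanity-check that it is present without ever printing the secret itself — a minimal sketch (the key below is a placeholder, not a real key):

```shell
# Placeholder value for illustration; substitute your real sk-ant-... key
export ANTHROPIC_API_KEY="sk-ant-placeholder"

# Confirm the variable is set, reporting only its length, never its value
if [ -n "${ANTHROPIC_API_KEY}" ]; then
  echo "ANTHROPIC_API_KEY is set (${#ANTHROPIC_API_KEY} chars)"
else
  echo "ANTHROPIC_API_KEY is not set"
fi
```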

Config file (~/.config/nubiferai/config.toml):

provider = "claude"

Available Models

Model       ID                           Notes
Sonnet 4.5  claude-sonnet-4-5-20250929   Recommended — best balance of quality and speed
Opus 4.6    claude-opus-4-6              Highest quality, slower, more expensive
Haiku 4.5   claude-haiku-4-5-20251001    Fastest, cheapest, good for simple tasks

Tips

  • Sonnet 4.5 is the default and recommended for most tasks
  • Use Haiku for quick, simple operations (S3 buckets, basic configs)
  • Use Opus for complex architecture decisions and multi-service deployments

OpenAI

OpenAI's GPT-4o family is a capable alternative to Claude, with a wide selection of models to choose from.

Setup

  1. Create an account at platform.openai.com
  2. Navigate to API Keys and create a new secret key
  3. Copy the key (starts with sk-...)

Configuration

GUI: Go to Settings > Providers, paste your key in the "OpenAI API Key" field.

CLI / Environment:

export OPENAI_API_KEY="sk-your-key-here"

# Persist
echo 'export OPENAI_API_KEY="sk-your-key-here"' >> ~/.bashrc

Config file:

provider = "openai"

Available Models

Model        Notes
GPT-4o       Recommended — fast and capable
GPT-4o Mini  Cheaper, good for simple tasks
o3-mini      Reasoning model, good for complex planning

AWS Bedrock

Access Claude models through your existing AWS infrastructure. Ideal for teams that need to keep API traffic within AWS.

Setup

  1. Sign in to the AWS Console
  2. Navigate to Amazon Bedrock > Model access
  3. Request access to Anthropic Claude models
  4. Wait for approval (usually immediate for Claude)
  5. Ensure your AWS credentials are configured

Configuration

AWS Credentials:

# Configure default profile
aws configure

# Or use a named profile
aws configure --profile nubiferai
export AWS_PROFILE=nubiferai
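To see which AWS identity those credentials actually resolve to (and catch misconfiguration before your first Bedrock call), you can query STS — a sketch that degrades gracefully when the CLI or credentials are missing:

```shell
# Show the AWS identity Bedrock calls will run as; fall back with a message
# if the AWS CLI is not installed or no credentials are configured
if command -v aws >/dev/null 2>&1; then
  aws sts get-caller-identity --output text 2>/dev/null \
    || echo "AWS credentials not configured"
else
  echo "AWS CLI not installed"
fi
```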

Config file:

provider = "bedrock"

Available Models

Model       Bedrock ID
Sonnet 4.5  anthropic.claude-sonnet-4-5-20250929-v1:0
Haiku 4.5   anthropic.claude-haiku-4-5-20251001-v1:0

NubiferOS Integration

On NubiferOS, Bedrock credentials are managed automatically through the workspace configuration. No manual AWS credential setup is needed — NubiferOS handles profile selection, region, and account access.


Switching Providers

In the GUI

Use the provider and model dropdowns in the header bar. Changes take effect immediately and are saved to your config file.

In the CLI

# One-time override
nubiferai nucleate --provider openai "Deploy a Lambda function"

# Change default
nubiferai config --init

Testing Your Connection

GUI: Go to Settings > Providers > Connection Test, click Test.

CLI:

nubiferai status

This will show your current provider, model, and whether the connection is working.

Troubleshooting

"No API key configured"

Set the appropriate environment variable or enter the key in Settings > Providers.

"Connection refused" (Ollama)

# Check if Ollama is running
systemctl status ollama

# Start it
ollama serve

"Model not found" (Ollama)

# List installed models
ollama list

# Pull the missing model
ollama pull llama3.2

"Access denied" (Bedrock)

  • Verify model access is enabled in the AWS Console
  • Check your AWS credentials: aws sts get-caller-identity
  • Ensure the IAM role has bedrock:InvokeModel permission
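To make the last point concrete, here is a minimal IAM policy sketch granting invoke access to Anthropic models on Bedrock; the wildcard resource ARN is illustrative — scope it to your region and specific model IDs in production:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.*"
    }
  ]
}
```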

"Rate limited" (Claude/OpenAI)

  • Wait a moment and retry
  • Consider a smaller model (Haiku / GPT-4o Mini) — they typically have higher rate limits and consume quota more slowly
  • Check your API usage dashboard for quota information