
LangChain.js + Hono API Backend

A backend API built with LangChain.js and Hono that demonstrates LLM agent capabilities, starting with streaming chat and conversational memory.

🚀 Features

Currently Implemented

  • Streaming Chat API - Real-time streaming responses from the LLM
  • Memory-enabled Chat - Conversational memory with LangGraph
  • Ollama Integration - Local LLM support with configurable models
  • Hono Framework - Fast, lightweight web framework
  • TypeScript Support - Full type safety and IntelliSense

API Endpoints

Endpoint     Method  Description                    Request Body
/            GET     Health check                   -
/llm/stream  POST    Streaming chat response        {"message": "your question"}
/llm/memory  POST    Chat with conversation memory  {"message": "your question"}

🛠️ Setup & Installation

Prerequisites

  • Bun (recommended) or Node.js 18+
  • Ollama installed and running locally

Installation

  1. Clone the repository

    git clone <your-repo-url>
    cd langchain
  2. Install dependencies

    bun install
  3. Set up environment variables. Create a .env file in the root directory (a sketch of how these variables are consumed follows these steps):

    MODEL_NAME=llama3.2:3b
    MODEL_URL=http://localhost:11434
    MODEL_TEMPERATURE=0.7
  4. Start Ollama and pull a model

    ollama serve
    ollama pull llama3.2:3b
  5. Run the development server

    bun run dev
  6. Access the API at http://localhost:3000
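
For reference, the environment variables from step 3 feed the Ollama model configured in src/llm/model.ts. A minimal sketch of that wiring, assuming the ChatOllama class from @langchain/ollama (the actual file may be organized differently):

import { ChatOllama } from "@langchain/ollama";

// Sketch only: reads the variables defined in step 3, falling back to the
// README defaults. The real src/llm/model.ts may differ.
export const model = new ChatOllama({
  model: process.env.MODEL_NAME ?? "llama3.2:3b",
  baseUrl: process.env.MODEL_URL ?? "http://localhost:11434",
  temperature: Number(process.env.MODEL_TEMPERATURE ?? "0.7"),
});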

📖 Usage Examples

Streaming Chat

curl -X POST http://localhost:3000/llm/stream \
  -H "Content-Type: application/json" \
  -d '{"message": "Explain quantum computing in simple terms"}'
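
Server-side, the route pairs Hono's streaming helper with LangChain's token stream. A minimal sketch, assuming the streamText helper from hono/streaming and the model instance sketched above (the actual src/llm/stream.ts may be structured differently):

import { Hono } from "hono";
import { streamText } from "hono/streaming";
import { model } from "./llm/model";

const app = new Hono();

app.post("/llm/stream", async (c) => {
  const { message } = await c.req.json<{ message: string }>();
  return streamText(c, async (stream) => {
    // model.stream() yields message chunks as the LLM emits tokens
    for await (const chunk of await model.stream(message)) {
      await stream.write(chunk.content as string);
    }
  });
});

export default app;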

Memory-enabled Chat

curl -X POST http://localhost:3000/llm/memory \
  -H "Content-Type: application/json" \
  -d '{"message": "What did we talk about earlier?"}'
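
Conversation memory follows LangGraph's checkpointer pattern: each turn is appended to a message history keyed by a thread id. A minimal sketch based on the LangChain.js docs (src/llm/memory.ts may differ, e.g. in how threads are keyed per client):

import { StateGraph, MessagesAnnotation, MemorySaver, START } from "@langchain/langgraph";
import { model } from "./llm/model";

// Single node: call the model with the accumulated message history
const callModel = async (state: typeof MessagesAnnotation.State) => {
  const response = await model.invoke(state.messages);
  return { messages: [response] };
};

const graph = new StateGraph(MessagesAnnotation)
  .addNode("model", callModel)
  .addEdge(START, "model")
  .compile({ checkpointer: new MemorySaver() });

// thread_id keys the stored history, so follow-ups see earlier turns
const reply = await graph.invoke(
  { messages: [{ role: "user", content: "What did we talk about earlier?" }] },
  { configurable: { thread_id: "demo" } },
);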

JavaScript/TypeScript Client

// Streaming example
const response = await fetch('http://localhost:3000/llm/stream', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Tell me a joke' })
});

const reader = response.body?.getReader();
const decoder = new TextDecoder();

// response.body can be null, so only read when a reader exists
if (reader) {
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value, { stream: true }));
  }
}
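
Passing { stream: true } to TextDecoder.decode() keeps multi-byte UTF-8 characters intact when they straddle chunk boundaries.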

🏗️ Project Structure

src/
├── index.ts          # Main Hono app and routes
└── llm/
    ├── model.ts      # LLM configuration (Ollama)
    ├── memory.ts     # Memory-enabled chat with LangGraph
    └── stream.ts     # Streaming chat implementation

🔮 Planned Features & Demos

Phase 1: Core LLM Capabilities

  • Document Q&A Agent - Upload and query documents
  • Code Assistant - Code generation and debugging
  • Multi-modal Support - Image and text processing
  • Tool Calling - Function calling and external API integration

Phase 2: Advanced Agents

  • Research Agent - Web search and information gathering
  • Data Analysis Agent - CSV/JSON data processing
  • Email Assistant - Email composition and management
  • Calendar Agent - Schedule management and planning

Phase 3: Production Features

  • Authentication & Rate Limiting
  • Database Integration - Persistent conversation storage
  • WebSocket Support - Real-time bidirectional communication
  • Agent Orchestration - Multi-agent workflows
  • Monitoring & Analytics - Performance tracking

Phase 4: Specialized Use Cases

  • Customer Support Bot - Ticket classification and responses
  • Content Generation - Blog posts, social media content
  • Translation Service - Multi-language support
  • Code Review Agent - Automated code analysis
  • Financial Analysis - Market data processing

🛠️ Development

Adding New Features

  1. Create a new LLM module in src/llm/
  2. Add an API endpoint in src/index.ts (see the sketch after these steps)
  3. Update documentation and examples
  4. Add tests for new functionality
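
For example, steps 1 and 2 might look like the following for a hypothetical summarize feature (the module name, prompt, and route are illustrative, not part of the repo):

// src/llm/summarize.ts (step 1: hypothetical new module)
import { model } from "./model";

export const summarize = async (text: string): Promise<string> => {
  const response = await model.invoke(`Summarize this in one sentence:\n\n${text}`);
  return response.content as string;
};

// Step 2: register the route on the existing Hono app in src/index.ts
// app.post("/llm/summarize", async (c) => {
//   const { message } = await c.req.json<{ message: string }>();
//   return c.json({ summary: await summarize(message) });
// });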

Environment Variables

Variable           Description           Default
MODEL_NAME         Ollama model name     llama3.2:3b
MODEL_URL          Ollama server URL     http://localhost:11434
MODEL_TEMPERATURE  Sampling temperature  0.7

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • LangChain.js - LLM framework
  • Hono - Web framework
  • Ollama - Local LLM runtime

Ready to build the future of AI-powered applications! 🚀
