AI Integration for Developers: Adding Intelligence to Your Apps (2023 Guide)
AI/ML | Software Development | 11 min read

Squalltec Team March 14, 2023

The AI Revolution is Here (March 2023)

What Changed in 2023:

Before (2022):

  • AI/ML = PhD required
  • Complex model training
  • Expensive infrastructure
  • Limited use cases

Now (2023):

  • AI = API call away
  • Pre-trained models (GPT-4, ChatGPT)
  • Accessible to all developers
  • Unlimited use cases

The Shift:

ChatGPT launched November 2022. Everything changed.

Real Impact:

  • 100M users in 2 months (fastest-growing consumer app ever)
  • Developers integrating AI everywhere
  • Every app can be “AI-powered”

This guide: Practical AI integration for real applications.

AI Use Cases for Regular Apps

What You Can Build:

1. Content Generation

  • Blog posts, product descriptions
  • Marketing copy, emails
  • Social media content
  • Code documentation

2. Intelligent Search

  • Semantic search (understand meaning)
  • Q&A over your documents
  • Customer support chatbots
  • Internal knowledge base

3. Data Analysis

  • Extract insights from text
  • Sentiment analysis
  • Classification & categorization
  • Summarization

4. Code Assistance

  • Code generation
  • Bug detection
  • Code review
  • Documentation

5. Personalization

  • Product recommendations
  • Content recommendations
  • Personalized emails
  • Dynamic UI

Let’s build these.

OpenAI API Basics

Getting Started:

# Sign up: platform.openai.com
# Get API key

# Install SDK
npm install openai

First API Call:

const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});

const openai = new OpenAIApi(configuration);

async function generateText(prompt) {
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'user', content: prompt }
    ],
    temperature: 0.7,
    max_tokens: 500
  });
  
  return response.data.choices[0].message.content;
}

// Usage
const result = await generateText('Write a product description for wireless headphones');
console.log(result);

Models Available (March 2023):

| Model | Cost | Speed | Quality | Use Case |
|---|---|---|---|---|
| gpt-4 | $$$ | Slow | Best | Complex reasoning |
| gpt-3.5-turbo | $ | Fast | Good | Most tasks |
| text-davinci-003 | $$ | Medium | Very Good | Creative writing |
| text-embedding-ada-002 | $ | Fast | N/A | Embeddings |

Recommendation: Start with gpt-3.5-turbo (ChatGPT model).

Building an AI-Powered Chatbot

Use Case: Customer support chatbot

Features:

  • Answers questions about products
  • Handles orders
  • Escalates to human if needed

Implementation:

const express = require('express');
const { Configuration, OpenAIApi } = require('openai');

const app = express();
app.use(express.json());

const openai = new OpenAIApi(new Configuration({
  apiKey: process.env.OPENAI_API_KEY
}));

// Store conversation history (in-memory for demo; use Redis or a database in production)
const conversations = new Map();

app.post('/api/chat', async (req, res) => {
  const { userId, message } = req.body;
  
  // Get or create conversation history
  if (!conversations.has(userId)) {
    conversations.set(userId, [
      {
        role: 'system',
        content: `You are a helpful customer support agent for an e-commerce store. 
                  Be friendly, concise, and helpful. 
                  If you can't answer something, suggest contacting human support.
                  Our products include: electronics, clothing, home goods.
                  Shipping takes 3-5 business days.
                  Return policy: 30 days.`
      }
    ]);
  }
  
  const history = conversations.get(userId);
  
  // Add user message
  history.push({
    role: 'user',
    content: message
  });
  
  try {
    // Get AI response
    const response = await openai.createChatCompletion({
      model: 'gpt-3.5-turbo',
      messages: history,
      temperature: 0.7,
      max_tokens: 200
    });
    
    const aiMessage = response.data.choices[0].message.content;
    
    // Add AI response to history
    history.push({
      role: 'assistant',
      content: aiMessage
    });
    
    // Keep only last 10 messages (manage token limit)
    if (history.length > 11) {  // System message + 10 messages
      history.splice(1, history.length - 11);
    }
    
    res.json({ message: aiMessage });
    
  } catch (error) {
    console.error('OpenAI error:', error);
    res.status(500).json({ error: 'Failed to get response' });
  }
});

app.listen(3000);

Frontend:

async function sendMessage(message) {
  const response = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      userId: getCurrentUserId(),
      message: message
    })
  });
  
  const data = await response.json();
  displayMessage(data.message);
}

Semantic Search with Embeddings

Problem: Traditional keyword search misses meaning

Example:

  • Search: “comfortable office chair”
  • Miss: “ergonomic seating” (different words, same meaning)

Solution: Embeddings + Vector Search

What are embeddings? Vectors of numbers that represent meaning:

  • Similar text → Similar numbers
  • “happy” and “joyful” → Close in vector space
  • “happy” and “sad” → Far apart
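The intuition above can be made concrete with a small cosine-similarity function (a minimal sketch; OpenAI embeddings are already normalized, so the dot product alone usually suffices, but the full formula is shown for clarity):

```javascript
// Cosine similarity: 1 = same direction, 0 = unrelated, negative = opposite
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional "embeddings" (real ada-002 vectors have 1536 dimensions)
const happy = [0.9, 0.1, 0.0];
const joyful = [0.85, 0.15, 0.05];
const sad = [-0.8, 0.1, 0.1];

console.log(cosineSimilarity(happy, joyful) > cosineSimilarity(happy, sad));  // true
```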

Implementation:

Step 1: Generate Embeddings for Your Content

const { Configuration, OpenAIApi } = require('openai');
const openai = new OpenAIApi(new Configuration({
  apiKey: process.env.OPENAI_API_KEY
}));

async function generateEmbedding(text) {
  const response = await openai.createEmbedding({
    model: 'text-embedding-ada-002',
    input: text
  });
  
  return response.data.data[0].embedding;  // Array of 1536 numbers
}

// Generate embeddings for all products
const products = [
  { id: 1, name: 'Ergonomic Office Chair', description: 'Comfortable seating...' },
  { id: 2, name: 'Standing Desk', description: 'Adjustable height desk...' },
  // ... more products
];

for (const product of products) {
  const text = `${product.name} ${product.description}`;
  product.embedding = await generateEmbedding(text);
  
  // Save to database
  await db.query(
    'UPDATE products SET embedding = $1 WHERE id = $2',
    [JSON.stringify(product.embedding), product.id]
  );
}

Step 2: Search Using Embeddings

async function semanticSearch(query) {
  // Generate embedding for search query
  const queryEmbedding = await generateEmbedding(query);
  
  // Find similar products by cosine similarity (pgvector's <=> operator is cosine distance)
  const results = await db.query(`
    SELECT 
      id, 
      name, 
      description,
      1 - (embedding <=> $1::vector) AS similarity
    FROM products
    ORDER BY similarity DESC
    LIMIT 10
  `, [JSON.stringify(queryEmbedding)]);
  
  return results.rows;
}

// Usage
const results = await semanticSearch('comfortable office chair');
// Returns: Ergonomic chairs, office seating, desk chairs, etc.

Requires: PostgreSQL with the pgvector extension, or a dedicated vector database (Pinecone, Weaviate).

Vector Databases

Popular Options:

1. Pinecone (Easiest)

const { PineconeClient } = require('@pinecone-database/pinecone');

const pinecone = new PineconeClient();
await pinecone.init({
  apiKey: process.env.PINECONE_API_KEY,
  environment: 'us-west1-gcp'
});

// Create index
await pinecone.createIndex({
  name: 'products',
  dimension: 1536,  // OpenAI embedding size
  metric: 'cosine'
});

const index = pinecone.Index('products');

// Upsert embeddings
await index.upsert([
  {
    id: '1',
    values: productEmbedding,
    metadata: {
      name: 'Ergonomic Chair',
      price: 299
    }
  }
]);

// Search
const results = await index.query({
  vector: queryEmbedding,
  topK: 10,
  includeMetadata: true
});

2. Weaviate (Open Source)

const weaviate = require('weaviate-ts-client');

const client = weaviate.client({
  scheme: 'http',
  host: 'localhost:8080'
});

// Store
await client.data.creator()
  .withClassName('Product')
  .withProperties({
    name: 'Ergonomic Chair',
    description: '...'
  })
  .withVector(productEmbedding)
  .do();

// Search
const result = await client.graphql
  .get()
  .withClassName('Product')
  .withNearVector({ vector: queryEmbedding })
  .withLimit(10)
  .withFields('name description')
  .do();

Building Q&A Over Your Documents

Use Case: Chat with your PDF documents

Architecture:

  1. Extract text from PDFs
  2. Split into chunks
  3. Generate embeddings
  4. Store in vector DB
  5. Query: Find relevant chunks
  6. Send to GPT with context

Implementation:

const pdf = require('pdf-parse');
const { OpenAIApi, Configuration } = require('openai');
const { PineconeClient } = require('@pinecone-database/pinecone');

const openai = new OpenAIApi(new Configuration({
  apiKey: process.env.OPENAI_API_KEY
}));

// Step 1: Extract and chunk text
async function processDocument(pdfBuffer) {
  const data = await pdf(pdfBuffer);
  const text = data.text;
  
  // Split into chunks (1000 chars each)
  const chunks = [];
  for (let i = 0; i < text.length; i += 1000) {
    chunks.push(text.slice(i, i + 1000));
  }
  
  return chunks;
}

// Step 2: Generate embeddings and store
async function indexDocument(chunks, documentId) {
  const pinecone = new PineconeClient();
  await pinecone.init({
    apiKey: process.env.PINECONE_API_KEY,
    environment: process.env.PINECONE_ENVIRONMENT
  });
  const index = pinecone.Index('documents');
  
  for (let i = 0; i < chunks.length; i++) {
    const embedding = await generateEmbedding(chunks[i]);
    
    await index.upsert([{
      id: `${documentId}-chunk-${i}`,
      values: embedding,
      metadata: {
        documentId,
        chunkIndex: i,
        text: chunks[i]
      }
    }]);
  }
}

// Step 3: Query with AI
async function askQuestion(question) {
  // Find relevant chunks
  const questionEmbedding = await generateEmbedding(question);
  
  const pinecone = new PineconeClient();
  await pinecone.init({
    apiKey: process.env.PINECONE_API_KEY,
    environment: process.env.PINECONE_ENVIRONMENT
  });
  const index = pinecone.Index('documents');
  
  const results = await index.query({
    vector: questionEmbedding,
    topK: 3,
    includeMetadata: true
  });
  
  // Get relevant text
  const context = results.matches
    .map(match => match.metadata.text)
    .join('\n\n');
  
  // Ask GPT with context
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [
      {
        role: 'system',
        content: 'Answer questions based on the provided context. If the answer is not in the context, say "I don\'t have enough information to answer that."'
      },
      {
        role: 'user',
        content: `Context:\n${context}\n\nQuestion: ${question}`
      }
    ],
    temperature: 0.3  // Lower temperature for factual answers
  });
  
  return response.data.choices[0].message.content;
}

// Usage
const fs = require('fs');
const pdfBuffer = fs.readFileSync('document.pdf');
const chunks = await processDocument(pdfBuffer);
await indexDocument(chunks, 'doc-123');

const answer = await askQuestion('What is the return policy?');
console.log(answer);
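The fixed 1000-character slicing in processDocument can cut a sentence in half right at a chunk boundary, losing context. A common refinement (sketched here with hypothetical sizes) is to overlap consecutive chunks:

```javascript
// Split text into chunks of `size` characters, each overlapping the previous by `overlap`
function chunkText(text, size = 1000, overlap = 200) {
  const chunks = [];
  const step = size - overlap;
  for (let i = 0; i < text.length; i += step) {
    chunks.push(text.slice(i, i + size));
    if (i + size >= text.length) break;  // Final chunk reached, stop
  }
  return chunks;
}
```

Swapping this in for the loop in processDocument means a sentence straddling a boundary still appears whole in at least one chunk.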

Function Calling (ChatGPT Plugins)

Use Case: Let AI interact with your systems

Example: AI that can check inventory, place orders

const functions = [
  {
    name: 'check_inventory',
    description: 'Check if a product is in stock',
    parameters: {
      type: 'object',
      properties: {
        productId: {
          type: 'string',
          description: 'The product ID'
        }
      },
      required: ['productId']
    }
  },
  {
    name: 'place_order',
    description: 'Place an order for a product',
    parameters: {
      type: 'object',
      properties: {
        productId: { type: 'string' },
        quantity: { type: 'number' }
      },
      required: ['productId', 'quantity']
    }
  }
];

async function chat(message) {
  const response = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: message }],
    functions: functions,
    function_call: 'auto'
  });
  
  const responseMessage = response.data.choices[0].message;
  
  // Check if AI wants to call a function
  if (responseMessage.function_call) {
    const functionName = responseMessage.function_call.name;
    const functionArgs = JSON.parse(responseMessage.function_call.arguments);
    
    // Execute the function
    let functionResult;
    if (functionName === 'check_inventory') {
      functionResult = await checkInventory(functionArgs.productId);
    } else if (functionName === 'place_order') {
      functionResult = await placeOrder(functionArgs.productId, functionArgs.quantity);
    }
    
    // Send function result back to AI
    const secondResponse = await openai.createChatCompletion({
      model: 'gpt-3.5-turbo',
      messages: [
        { role: 'user', content: message },
        responseMessage,
        {
          role: 'function',
          name: functionName,
          content: JSON.stringify(functionResult)
        }
      ]
    });
    
    return secondResponse.data.choices[0].message.content;
  }
  
  return responseMessage.content;
}

// Usage
const response = await chat('Do you have product ABC123 in stock?');
// AI calls check_inventory function automatically
// Returns: "Yes, we have 15 units of product ABC123 in stock."
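The checkInventory and placeOrder handlers called above are assumed to exist. Hypothetical stubs backed by an in-memory catalog might look like this (illustrative only; a real app would query a database):

```javascript
// Hypothetical in-memory inventory (stand-in for a real database)
const inventory = { ABC123: 15, XYZ789: 0 };

async function checkInventory(productId) {
  const stock = inventory[productId] ?? 0;
  return { productId, inStock: stock > 0, quantity: stock };
}

async function placeOrder(productId, quantity) {
  const stock = inventory[productId] ?? 0;
  if (stock < quantity) {
    return { success: false, reason: 'insufficient stock' };
  }
  inventory[productId] -= quantity;
  return { success: true, productId, quantity };
}
```

Whatever these return is serialized with JSON.stringify and handed back to the model, so plain objects with descriptive keys work well.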

Cost Management

OpenAI Pricing (March 2023):

| Model | Input | Output |
|---|---|---|
| GPT-4 | $0.03 / 1K tokens | $0.06 / 1K tokens |
| GPT-3.5-turbo | $0.002 / 1K tokens | $0.002 / 1K tokens |
| Embeddings (ada-002) | $0.0001 / 1K tokens | N/A |
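These per-token prices translate into a quick back-of-envelope estimator. A rough rule of thumb is ~4 characters per token for English text (an approximation; use OpenAI's tiktoken library for exact counts):

```javascript
// March 2023 prices per 1K tokens (input, output)
const PRICES = {
  'gpt-4': { input: 0.03, output: 0.06 },
  'gpt-3.5-turbo': { input: 0.002, output: 0.002 }
};

// Rough token estimate: ~4 characters per token for English text
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function estimateCost(model, inputTokens, outputTokens) {
  const p = PRICES[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1000;
}

// A 1K-token prompt with a 500-token reply on gpt-3.5-turbo costs roughly $0.003
console.log(estimateCost('gpt-3.5-turbo', 1000, 500));
```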

Cost Optimization:

1. Use Cheaper Models When Possible

// Simple tasks: gpt-3.5-turbo
// Complex reasoning: gpt-4

async function generateResponse(prompt, complexity = 'simple') {
  const model = complexity === 'complex' ? 'gpt-4' : 'gpt-3.5-turbo';
  
  return await openai.createChatCompletion({
    model,
    messages: [{ role: 'user', content: prompt }]
  });
}

2. Cache Responses

const cache = new Map();

async function cachedGeneration(prompt) {
  if (cache.has(prompt)) {
    return cache.get(prompt);
  }
  
  const response = await generateText(prompt);
  cache.set(prompt, response);
  
  return response;
}
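The Map above grows without bound as prompts accumulate. A minimal size-capped cache (evicting the oldest entry when full; a simple sketch rather than a full LRU) keeps memory predictable:

```javascript
// Size-capped cache: once maxSize is reached, the oldest entry is evicted.
// Map iterates in insertion order, so the first key is the oldest.
class BoundedCache {
  constructor(maxSize = 1000) {
    this.maxSize = maxSize;
    this.map = new Map();
  }

  has(key) {
    return this.map.has(key);
  }

  get(key) {
    return this.map.get(key);
  }

  set(key, value) {
    if (this.map.size >= this.maxSize && !this.map.has(key)) {
      const oldestKey = this.map.keys().next().value;
      this.map.delete(oldestKey);
    }
    this.map.set(key, value);
  }
}
```

This is a drop-in replacement for the Map in cachedGeneration. For production, adding a TTL is worth considering, since cached responses can go stale.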

3. Reduce Token Usage

// Limit conversation history
if (history.length > 10) {
  history = [history[0], ...history.slice(-9)];  // Keep system message + last 9
}

// Use lower max_tokens
await openai.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: history,
  max_tokens: 150  // Limit response length
});

4. Monitor Usage

const { Configuration, OpenAIApi } = require('openai');

// Wrapper to track costs
class CostTrackingOpenAI {
  constructor(apiKey) {
    this.openai = new OpenAIApi(new Configuration({ apiKey }));
    this.totalCost = 0;
  }
  
  async createChatCompletion(params) {
    const response = await this.openai.createChatCompletion(params);
    
    // Calculate cost
    const usage = response.data.usage;
    const cost = (usage.prompt_tokens * 0.002 + usage.completion_tokens * 0.002) / 1000;
    
    this.totalCost += cost;
    console.log(`Request cost: $${cost.toFixed(4)}, Total: $${this.totalCost.toFixed(4)}`);
    
    return response;
  }
  
  getTotalCost() {
    return this.totalCost;
  }
}

Error Handling & Rate Limits

Rate Limits (March 2023):

  • GPT-3.5-turbo: 3,500 requests/min
  • GPT-4: 200 requests/min
  • Embeddings: 3,000 requests/min

Handle Rate Limits:

async function retryWithBackoff(fn, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error.response?.status === 429) {  // Rate limit
        const delay = Math.pow(2, i) * 1000;  // Exponential backoff
        console.log(`Rate limited. Retrying in ${delay}ms...`);
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max retries exceeded');
}

// Usage
const response = await retryWithBackoff(() => 
  openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: prompt }]
  })
);

Production Best Practices

1. Stream Responses (Better UX)

const response = await openai.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: prompt }],
  stream: true
}, { responseType: 'stream' });

response.data.on('data', (chunk) => {
  const lines = chunk.toString().split('\n').filter(line => line.trim() !== '');
  
  for (const line of lines) {
    const message = line.replace(/^data: /, '');
    if (message === '[DONE]') return;
    
    try {
      const parsed = JSON.parse(message);
      const content = parsed.choices[0].delta?.content;
      if (content) {
        process.stdout.write(content);  // Stream to user
      }
    } catch {}  // Ignore incomplete JSON fragments split across stream chunks
  }
});

2. Content Moderation

async function moderateContent(text) {
  const response = await openai.createModeration({
    input: text
  });
  
  const results = response.data.results[0];
  
  if (results.flagged) {
    console.log('Content flagged:', results.categories);
    throw new Error('Content violates usage policies');
  }
  
  return true;
}

// Use before generating
await moderateContent(userInput);
const response = await generateText(userInput);

3. Prompt Engineering

// Bad prompt
const prompt = 'Write blog post';

// Good prompt
const prompt = `Write a 500-word blog post about sustainable fashion.

Audience: Young adults interested in ethical consumption
Tone: Informative yet conversational
Include:
- 3 key benefits of sustainable fashion
- 2 actionable tips for consumers
- 1 surprising statistic

Format: Introduction, 3 sections with headers, conclusion`;
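The structure of the "good prompt" above can be captured in a small helper so every generation request follows the same pattern (a hypothetical helper, not part of any SDK):

```javascript
// Build a structured prompt from task, audience, tone, requirements, and format
function buildPrompt({ task, audience, tone, include = [], format }) {
  const lines = [task, ''];
  if (audience) lines.push(`Audience: ${audience}`);
  if (tone) lines.push(`Tone: ${tone}`);
  if (include.length > 0) {
    lines.push('Include:');
    for (const item of include) lines.push(`- ${item}`);
  }
  if (format) lines.push(`Format: ${format}`);
  return lines.join('\n');
}

const prompt = buildPrompt({
  task: 'Write a 500-word blog post about sustainable fashion.',
  audience: 'Young adults interested in ethical consumption',
  tone: 'Informative yet conversational',
  include: [
    '3 key benefits of sustainable fashion',
    '2 actionable tips for consumers'
  ],
  format: 'Introduction, 3 sections with headers, conclusion'
});
```

Centralizing prompt construction like this also makes prompts easy to version and A/B test.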

4. User Privacy

// Anonymize user data
function anonymizeData(data) {
  return data
    .replace(/\b[\w\.-]+@[\w\.-]+\.\w{2,4}\b/gi, '[EMAIL]')
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, '[SSN]')
    .replace(/\b\d{4}-\d{4}-\d{4}-\d{4}\b/g, '[CREDIT_CARD]');
}

const sanitizedInput = anonymizeData(userInput);
const response = await openai.createChatCompletion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: sanitizedInput }]
});

Real-World Use Cases

1. Content Generation Platform

// Generate product descriptions
async function generateProductDescription(product) {
  const prompt = `Generate a compelling product description for:
  
  Product: ${product.name}
  Category: ${product.category}
  Features: ${product.features.join(', ')}
  Price: $${product.price}
  
  Write 2-3 paragraphs highlighting benefits and unique features.
  Use persuasive language that appeals to ${product.targetAudience}.`;
  
  return await generateText(prompt);
}

2. Intelligent Customer Support

// Classify support tickets
async function classifyTicket(ticket) {
  const prompt = `Classify this support ticket into one category:
  
  Categories: billing, technical, shipping, returns, general
  
  Ticket: "${ticket}"
  
  Respond with only the category name.`;
  
  const category = (await generateText(prompt)).trim().toLowerCase();
  
  // Route to appropriate team
  await routeToTeam(category, ticket);
  return category;
}

3. Code Review Assistant

async function reviewCode(code) {
  const prompt = `Review this code and provide feedback:
  
  \`\`\`javascript
  ${code}
  \`\`\`
  
  Check for:
  1. Bugs and errors
  2. Security vulnerabilities
  3. Performance issues
  4. Code quality and readability
  
  Provide specific, actionable suggestions.`;
  
  return await generateText(prompt);
}

Conclusion: AI is Now Accessible

March 2023 Reality:

  • AI integration is easy (API calls)
  • Pre-trained models are powerful
  • Use cases are limitless
  • Every developer can build AI features

Getting Started:

  1. Week 1: Basic OpenAI API calls
  2. Week 2: Build a chatbot
  3. Week 3: Implement semantic search
  4. Week 4: Q&A over documents
  5. Month 2: Function calling, advanced features

Cost: $10-100/month for most applications

Don’t wait. Integrate AI now. Your competitors are.

Key Takeaways:

  1. AI integration is now accessible through simple API calls
  2. GPT-3.5-turbo is fast and affordable for most use cases
  3. Embeddings enable semantic search and understanding
  4. Vector databases store and search embeddings efficiently
  5. Q&A over documents combines embeddings + GPT
  6. Function calling lets AI interact with your systems
  7. Cost management through caching and model selection
  8. Rate limiting requires exponential backoff retry logic
  9. Content moderation prevents policy violations
  10. Good prompts dramatically improve AI output quality

Ready to add AI to your application?

We’ve integrated AI into 50+ applications. Free AI architecture consultation available.

[Schedule AI Integration Consultation →]
