Core LangChain.js abstractions and schemas
@langchain/core is the foundational package of the LangChain.js ecosystem, providing interfaces and base classes for building applications powered by large language models. It defines the core abstractions that enable developers to work with LLMs, chat models, embeddings, vector stores, retrievers, and document loaders through consistent, provider-agnostic APIs. Rather than containing specific integrations, it establishes the contracts that other packages implement, allowing you to swap between OpenAI, Anthropic, Google, or custom providers without rewriting application logic.
The package emerged from LangChain's architectural evolution toward modular design. Originally, all functionality lived in a monolithic langchain package, but as the ecosystem grew, the team separated core abstractions into @langchain/core while moving integrations to provider-specific packages like @langchain/openai and @langchain/anthropic. This separation enables independent versioning, reduces bundle sizes, and allows third-party tools to build on LangChain's primitives without inheriting heavy dependencies.
Most developers don't install @langchain/core directly—it comes as a dependency when you install langchain or provider packages. However, understanding its structure helps you architect cleaner applications. The package includes TypeScript-native implementations with Zod schemas for validation, streaming support via async iterators, and callback systems for observability. With 2.7 million weekly downloads, it powers production applications ranging from customer support chatbots to document analysis pipelines.
The core abstractions include message types (HumanMessage, AIMessage, SystemMessage), prompt templates that handle variable interpolation, output parsers that transform LLM responses into structured data, and runnable interfaces that enable method chaining with the pipe operator. These primitives compose into chains, agents, and complex workflows without requiring deep framework knowledge.
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { RunnableSequence } from '@langchain/core/runnables';
import { ChatOpenAI } from '@langchain/openai';
// Define a reusable prompt template
const promptTemplate = ChatPromptTemplate.fromMessages([
['system', 'You are a helpful assistant that explains technical concepts concisely.'],
['human', 'Explain {concept} in under 50 words.']
]);
// Create a chain: prompt -> model -> string output
const model = new ChatOpenAI({ modelName: 'gpt-3.5-turbo', temperature: 0.7 });
const outputParser = new StringOutputParser();
const chain = RunnableSequence.from([
promptTemplate,
model,
outputParser
]);
// Invoke the chain with variables
const result = await chain.invoke({ concept: 'closure in JavaScript' });
console.log(result);
// Output: "A closure is a function that retains access to variables from its outer scope..."
// Stream responses token-by-token
const stream = await chain.stream({ concept: 'async/await' });
for await (const chunk of stream) {
process.stdout.write(chunk);
}

Multi-Provider Chat Applications: Build chatbots that can switch between OpenAI's GPT-4, Anthropic's Claude, or local models without changing business logic. The unified ChatModel interface ensures consistent handling of streaming responses, token counting, and message history regardless of provider.
Retrieval-Augmented Generation (RAG) Pipelines: Chain together document retrievers, embedding models, and LLMs to answer questions from your data. The core abstractions for VectorStore and Retriever let you swap between Pinecone, Weaviate, or local FAISS indexes while keeping pipeline code unchanged.
Structured Data Extraction: Use output parsers to reliably extract JSON, arrays, or Zod-validated objects from unstructured LLM responses. The StructuredOutputParser and JsonOutputParser handle schema validation and retry logic when models return malformed data.
Prompt Engineering Systems: Create reusable prompt templates with typed variables using ChatPromptTemplate and MessagesPlaceholder. Manage few-shot examples, system instructions, and user inputs as composable units that teams can version control and test independently.
Agent Workflows with Tool Calling: Define custom tools (functions the LLM can invoke) using the core Tool abstraction, then compose them into agents that reason about multi-step tasks. The standardized tool schema works across providers that support function calling like OpenAI and Anthropic.
npm install @langchain/core
pnpm add @langchain/core
bun add @langchain/core