This API is deprecated, as Anthropic now officially supports tool calling. See the ChatAnthropic documentation for the recommended approach.
Anthropic Tools
LangChain offers an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions.
Setup
To start, install the @langchain/anthropic integration package.
npm install @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Initialize model
You can initialize this wrapper the same way you'd initialize a standard ChatAnthropic instance:
We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
const model = new ChatAnthropicTools({
temperature: 0.1,
model: "claude-3-sonnet-20240229",
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ANTHROPIC_API_KEY
});
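Once initialized, the wrapper behaves like any other LangChain chat model. A minimal sketch of a plain invocation, reusing the model above (the prompt text is illustrative):

import { HumanMessage } from "@langchain/core/messages";

// Invoke the model with a list of messages, like any LangChain chat model
const response = await model.invoke([
  new HumanMessage({ content: "Hello, how are you?" }),
]);
console.log(response.content);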
Passing in tools
You can now pass in tools the same way as you would with OpenAI models:
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { HumanMessage } from "@langchain/core/messages";
const model = new ChatAnthropicTools({
temperature: 0.1,
model: "claude-3-sonnet-20240229",
}).bind({
tools: [
{
type: "function",
function: {
name: "get_current_weather",
description: "Get the current weather in a given location",
parameters: {
type: "object",
properties: {
location: {
type: "string",
description: "The city and state, e.g. San Francisco, CA",
},
unit: { type: "string", enum: ["celsius", "fahrenheit"] },
},
required: ["location"],
},
},
},
],
// You can set the `tool_choice` arg to force the model to use a function
tool_choice: {
type: "function",
function: {
name: "get_current_weather",
},
},
});
const response = await model.invoke([
new HumanMessage({
content: "What's the weather in Boston?",
}),
]);
console.log(response);
/*
AIMessage {
lc_serializable: true,
lc_kwargs: { content: '', additional_kwargs: { tool_calls: [Array] } },
lc_namespace: [ 'langchain_core', 'messages' ],
content: '',
name: undefined,
additional_kwargs: { tool_calls: [ [Object] ] }
}
*/
console.log(response.additional_kwargs.tool_calls);
/*
[
{
id: '0',
type: 'function',
function: {
name: 'get_current_weather',
arguments: '{"location":"Boston, MA","unit":"fahrenheit"}'
}
}
]
*/
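Note that the arguments field arrives as a JSON string, not a parsed object. A minimal sketch of extracting the parsed arguments from the response above (the optional check is defensive, since the model is not guaranteed to call a tool):

const toolCall = response.additional_kwargs.tool_calls?.[0];
if (toolCall) {
  // `arguments` is a JSON string and must be parsed before use
  const args = JSON.parse(toolCall.function.arguments);
  console.log(args.location); // "Boston, MA"
  console.log(args.unit); // "fahrenheit"
}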
API Reference:
- ChatAnthropicTools from @langchain/anthropic/experimental
- HumanMessage from @langchain/core/messages
Parallel tool calling
The model may choose to call multiple tools. Here is an example using an extraction use case:
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { PromptTemplate } from "@langchain/core/prompts";
import { JsonOutputToolsParser } from "@langchain/core/output_parsers/openai_tools";
const EXTRACTION_TEMPLATE = `Extract and save the relevant entities mentioned in the following passage together with their properties.
Passage:
{input}
`;
const prompt = PromptTemplate.fromTemplate(EXTRACTION_TEMPLATE);
// Use Zod for easier schema declaration
const schema = z.object({
name: z.string().describe("The name of a person"),
height: z.number().describe("The person's height"),
hairColor: z.optional(z.string()).describe("The person's hair color"),
});
const model = new ChatAnthropicTools({
temperature: 0.1,
model: "claude-3-sonnet-20240229",
}).bind({
tools: [
{
type: "function",
function: {
name: "person",
description: "Extracts the relevant people from the passage.",
parameters: zodToJsonSchema(schema),
},
},
],
// Can also set to "auto" to let the model choose a tool
tool_choice: {
type: "function",
function: {
name: "person",
},
},
});
// Use a JsonOutputToolsParser to get the parsed JSON response directly.
const chain = prompt.pipe(model).pipe(new JsonOutputToolsParser());
const response = await chain.invoke({
input:
"Alex is 5 feet tall. Claudia is 1 foot taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.",
});
console.log(JSON.stringify(response, null, 2));
/*
[
{
"type": "person",
"args": {
"name": "Alex",
"height": 5,
"hairColor": "blonde"
}
},
{
"type": "person",
"args": {
"name": "Claudia",
"height": 6,
"hairColor": "brunette"
}
}
]
*/
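Since JsonOutputToolsParser returns plain JSON, you can optionally revalidate each extracted entity against the original Zod schema to catch malformed outputs. A sketch, reusing the schema and response values from the example above:

for (const toolCall of response) {
  // .parse() throws if the extracted args don't match the schema
  const person = schema.parse(toolCall.args);
  console.log(
    `${person.name}: ${person.height} feet, ${person.hairColor ?? "unknown"} hair`
  );
}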
API Reference:
- ChatAnthropicTools from @langchain/anthropic/experimental
- PromptTemplate from @langchain/core/prompts
- JsonOutputToolsParser from @langchain/core/output_parsers/openai_tools
.withStructuredOutput({ ... })
The .withStructuredOutput method is in beta. It is actively being worked on, so the API may change.
Using the .withStructuredOutput method, you can make the LLM return structured output, given only a Zod or JSON schema:
import { z } from "zod";
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { ChatPromptTemplate } from "@langchain/core/prompts";
const calculatorSchema = z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute"),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
});
const model = new ChatAnthropicTools({
model: "claude-3-sonnet-20240229",
temperature: 0.1,
});
// Pass the schema to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorSchema);
// You can also set force: false to give the model scratchpad space.
// This may improve its reasoning capabilities.
// const modelWithTool = model.withStructuredOutput(calculatorSchema, {
// force: false,
// });
const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);
// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);
const response = await chain.invoke({
input: "What is 2 + 2?",
});
console.log(response);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/
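Because the result already conforms to calculatorSchema, it can be consumed directly in typed code. For example, a small hypothetical helper that evaluates the returned operation:

// Hypothetical helper; `calculatorSchema` is the Zod schema defined above
const calculate = ({
  operation,
  number1,
  number2,
}: z.infer<typeof calculatorSchema>) => {
  switch (operation) {
    case "add":
      return number1 + number2;
    case "subtract":
      return number1 - number2;
    case "multiply":
      return number1 * number2;
    case "divide":
      return number1 / number2;
  }
};

console.log(calculate(response)); // 4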
API Reference:
- ChatAnthropicTools from @langchain/anthropic/experimental
- ChatPromptTemplate from @langchain/core/prompts
Using JSON schema:
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { ChatPromptTemplate } from "@langchain/core/prompts";
const calculatorJsonSchema = {
type: "object",
properties: {
operation: {
type: "string",
enum: ["add", "subtract", "multiply", "divide"],
description: "The type of operation to execute.",
},
number1: { type: "number", description: "The first number to operate on." },
number2: {
type: "number",
description: "The second number to operate on.",
},
},
required: ["operation", "number1", "number2"],
description: "A simple calculator tool",
};
const model = new ChatAnthropicTools({
model: "claude-3-sonnet-20240229",
temperature: 0.1,
});
// Pass the schema and, optionally, the tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorJsonSchema, {
name: "calculator",
});
const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);
// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);
const response = await chain.invoke({
input: "What is 2 + 2?",
});
console.log(response);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/
API Reference:
- ChatAnthropicTools from @langchain/anthropic/experimental
- ChatPromptTemplate from @langchain/core/prompts
Related
- Chat model conceptual guide
- Chat model how-to guides