Contextual AI: How It Started Understanding Us Like Humans

Abdulrahman Alfulayt
2 min read

"Tell it to send the email we talked about yesterday, move the meeting to next week, and include the team name."

That’s a super simple instruction… for us humans.

But say it to a traditional AI assistant, and you’ll get: "Which email? What meeting? What team?"

This is where contextual AI comes into play.


What is Contextual AI?

It’s an AI that tries to understand the context behind what you're saying instead of treating every input as a standalone message.

Context includes:

  • What you said earlier.
  • Your intent.
  • Your preferences.
  • Prior conversations and interactions.

This type of AI doesn’t just look at your message as raw text, but as part of an ongoing story or scene.


Why Is This Important?

Because it paves the way for agents that:

  • Hold ongoing conversations.
  • Understand vague instructions.
  • Execute multi-step tasks without repeating details.

Types of Context an Agent Can Handle

1. Chat History Context

Remembers what was said earlier so the agent doesn’t lose track.

2. User Context

Like your name, preferences, and current projects.

3. Task Context

For example: "You’re responsible for managing Abdulrahman’s appointments only."

4. External Context

Such as reading from your calendar, email, or files before replying.
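Put together, these four context types are just pieces of text that get assembled into the prompt before each reply. Here is a minimal plain-Python sketch of that assembly step (the function and field names are hypothetical, not a LangChain API):

```python
# Hypothetical sketch: combining all four context types into the text the model sees.
def build_prompt(user_message, chat_history, user_profile, task, external):
    """Assemble every context source into a single prompt string."""
    return (
        f"Task: {task}\n"                                    # task context
        f"User: {user_profile}\n"                            # user context
        f"External data: {external}\n"                       # external context
        "Conversation so far:\n" + "\n".join(chat_history)   # chat history context
        + f"\nHuman: {user_message}\nAI:"
    )

prompt = build_prompt(
    user_message="Move the meeting to next week",
    chat_history=["Human: Book a meeting with the team",
                  "AI: Done, Thursday 10am"],
    user_profile="Abdulrahman, prefers Arabic",
    task="You manage Abdulrahman's appointments only",
    external="Calendar: team meeting on Thursday 10am",
)
```

With all of that in the prompt, "the meeting" is no longer ambiguous: the model can see which meeting, whose calendar, and what was already agreed.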


How to Teach an Agent to Handle Context

Practical Example Using LangChain:

# Note: these import paths target classic LangChain (pre-0.2);
# newer releases move ChatOpenAI into the langchain_openai package.
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

# 1. LLM initialization
llm = ChatOpenAI(temperature=0)

# 2. Set up short-term memory
memory = ConversationBufferMemory()

# 3. Link memory to the model
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

# 4. Run conversation
conversation.predict(input="I want to cancel next week's appointment")
conversation.predict(input="Okay, book it for Tuesday instead")

What’s Happening Here?

  • ConversationBufferMemory stores the dialogue.
  • ConversationChain injects that history into each prompt.
  • The model replies with awareness of what was previously said.
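The mechanism behind those three bullets is simple. This toy class (plain Python, not the actual LangChain implementation) mimics what ConversationBufferMemory and ConversationChain do together: store every turn verbatim, then prepend the whole transcript to the next prompt:

```python
class ToyBufferMemory:
    """Minimal stand-in for ConversationBufferMemory: keeps each turn verbatim."""
    def __init__(self):
        self.turns = []

    def save_context(self, user_input, ai_output):
        self.turns.append(f"Human: {user_input}")
        self.turns.append(f"AI: {ai_output}")

    def load(self):
        return "\n".join(self.turns)

memory = ToyBufferMemory()
memory.save_context("I want to cancel next week's appointment", "Cancelled.")

# The chain injects the stored history ahead of the new input, so the model
# can resolve "it" in the second message:
prompt = memory.load() + "\nHuman: Okay, book it for Tuesday instead\nAI:"
```

Because the first exchange is right there in the prompt, "book it" unambiguously refers to the cancelled appointment.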

How to Add Long-Term Memory

Long-term memory is useful for remembering:

  • User data.
  • Completed projects.
  • Preferences that don’t change often.

Example with LangChain + FAISS:

from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.memory import VectorStoreRetrieverMemory

# Build the long-term memory store
embedding = OpenAIEmbeddings()
vectorstore = FAISS.from_texts([
    "User is Abdulrahman and works on the Notion Arabia project",
    "Abdulrahman prefers conversations in Arabic"
], embedding=embedding)

# Use it as retrievable memory
long_term_memory = VectorStoreRetrieverMemory(retriever=vectorstore.as_retriever())

Combine with Short-Term Memory:

from langchain.memory import CombinedMemory
from langchain.prompts import PromptTemplate

# Give each memory a unique variable name: both default to "history",
# which CombinedMemory rejects
combined_memory = CombinedMemory(memories=[
    ConversationBufferMemory(memory_key="chat_history", input_key="input"),  # short-term
    long_term_memory  # long-term (exposes "history" by default)
])

# The default ConversationChain prompt only has a {history} slot,
# so supply a prompt with one slot per memory
prompt = PromptTemplate(
    input_variables=["history", "chat_history", "input"],
    template="Long-term facts:\n{history}\n\nConversation:\n{chat_history}\nHuman: {input}\nAI:"
)

conversation = ConversationChain(
    llm=llm,
    memory=combined_memory,
    prompt=prompt,
    verbose=True
)

Add New Knowledge to Long-Term Memory:

long_term_memory.save_context(
    {"input": ""},
    {"output": "Abdulrahman finished the OCR project last week"}
)

Now, whenever you start a new conversation, the agent can recall past facts to better serve you.
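That recall step boils down to: embed the stored facts, embed the query, and return the closest matches. Here is a dependency-free sketch of the idea, using crude word overlap in place of real embeddings (FAISS and OpenAIEmbeddings do the same thing with dense vectors and much better similarity):

```python
def overlap_score(a, b):
    """Crude similarity: count shared lowercase words (real systems compare embedding vectors)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

# The same facts stored in the long-term memory examples above
facts = [
    "User is Abdulrahman and works on the Notion Arabia project",
    "Abdulrahman prefers conversations in Arabic",
    "Abdulrahman finished the OCR project last week",
]

def recall(query, k=1):
    """Return the k stored facts most similar to the query."""
    return sorted(facts, key=lambda f: overlap_score(query, f), reverse=True)[:k]

best = recall("What project was finished last week?")
```

Swap `overlap_score` for cosine similarity over embeddings and `facts` for a FAISS index, and you have exactly what VectorStoreRetrieverMemory does on every turn.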


Tools and Libraries for Contextual AI

  • LangChain Memory: manage conversational memory.
  • AutoGen: build multi-agent systems with shared context.
  • CrewAI: coordinate multiple agents with shared task awareness.
  • LangGraph: manage complex workflows with contextual transitions.
  • ReAct pattern: combine reasoning and action using past interaction context.

Tips for Building Smart Contextual Agents

  1. Don’t inject everything — only include what’s relevant.
  2. Use long-term memory (e.g. FAISS, Redis) for persistent facts.
  3. Add clear task instructions in the system prompt.
  4. Integrate with external tools (calendar, files, APIs) for better context.
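Tip 1 in practice: prompts have a size budget, so trim the context before injecting it. A simple sketch (the function name and character budget are illustrative, not from any library) that keeps only the most recent turns that fit:

```python
def trim_history(turns, max_chars=200):
    """Keep the most recent turns that fit within the character budget, newest first."""
    kept, used = [], 0
    for turn in reversed(turns):          # walk backwards from the newest turn
        if used + len(turn) > max_chars:  # stop once the budget is exhausted
            break
        kept.append(turn)
        used += len(turn)
    return list(reversed(kept))           # restore chronological order

turns = [f"Human: message number {i}" for i in range(20)]
recent = trim_history(turns, max_chars=100)  # only the last few turns survive
```

More sophisticated variants score turns by relevance to the current input (as in the retrieval sketch earlier) or summarize old turns instead of dropping them, but the principle is the same: inject only what the model needs right now.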

Summary

Contextual AI is what transforms assistants from reactive bots into true collaborators.

Once you teach your agent to remember, reason, and relate, it’ll stop asking “which email?” and start saying “I’ve sent the team update — meeting rescheduled to next Tuesday.”
