December 1-5, 2025 | The Venetian, Las Vegas | Booth #1520

Redis at AWS re:Invent ’25

We’re heading back to Las Vegas, and our team is ready to talk all about building fast, accurate AI apps and more.

Redis Hallucination Hub

Step into the Hallucination Hub

We’re bringing GenAI quirks to life and leaning into absurd AI “hallucinations.” Join us as we blur the lines between sensory fact and fiction with delicious food and cocktails, all while discussing the future of AI apps.


See us at booth #1520

Talk to our team, listen to our lightning talks, get a live demo, and grab some swag.

See what we have in store

Kick off the week in style

Save your spot for our exclusive happy hour at Flight Club in the Grand Canal Shoppes at the Venetian.

Get the details

Meet with our team

Get the inside track from our execs to learn what we have in store for the future of AI apps.

Book a meeting

Don’t miss our speaking sessions

Mon, Dec 1 | 5:30 PM

How Sky powers the future of streaming with Redis Cloud

API209-S

Learn how Sky delivers seamless, real-time streaming to millions with Redis Cloud at its core. During the Peacock-exclusive NFL Wild Card game, Redis powered 140,000+ writes per second with sub-millisecond latency and cross-region replication. Write-behind persistence enabled cost-efficient scaling and migration to AWS Keyspaces, while vector search and semantic caching now drive Sky’s GenAI platform, Genapi. Discover the architecture and innovations behind Sky’s transformation from real-time performance to AI-powered personalization.

Understand how Redis Cloud delivers:

  • Cross-region Active-Active replication to keep millions of users in sync worldwide.
  • Sub-millisecond latency and intelligent auto-scaling to handle record traffic peaks.
  • Write-behind persistence that reduced costs while unlocking Sky’s migration to AWS Keyspaces.
  • Vector search and semantic caching powering Sky’s GenAI platform, Genapi, to deliver personalized, AI-driven content discovery.
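The write-behind persistence pattern highlighted above can be sketched in a few lines of Python. This is an illustrative toy only (the `WriteBehindCache` class and its names are invented for this sketch, not how Redis Enterprise implements write-behind): writes land in memory immediately, while a background worker drains them to a slower backing store.

```python
import queue
import threading
import time

class WriteBehindCache:
    """Toy write-behind cache: reads and writes hit the in-memory store
    immediately; a background worker drains queued writes to the
    (slower) backing store asynchronously."""

    def __init__(self, backing_store: dict, flush_interval: float = 0.05):
        self.cache = {}
        self.backing_store = backing_store
        self.pending = queue.Queue()
        self.flush_interval = flush_interval
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._flush_loop, daemon=True)
        self._worker.start()

    def set(self, key, value):
        self.cache[key] = value          # fast path: in-memory write
        self.pending.put((key, value))   # persisted later, asynchronously

    def get(self, key):
        return self.cache.get(key)

    def _flush_loop(self):
        while not self._stop.is_set():
            time.sleep(self.flush_interval)
            self._drain()
        self._drain()  # flush anything still pending on shutdown

    def _drain(self):
        while True:
            try:
                key, value = self.pending.get_nowait()
            except queue.Empty:
                break
            self.backing_store[key] = value  # slow, durable write

    def close(self):
        self._stop.set()
        self._worker.join()

store = {}                       # stands in for a durable database
cache = WriteBehindCache(store)
cache.set("session:42", "active")
print(cache.get("session:42"))   # read served from memory immediately
cache.close()                    # drains pending writes on shutdown
print(store["session:42"])       # change has reached the backing store
```

The point of the pattern is that the caller never waits on the durable write, which is what makes the cost-efficient scaling described above possible.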

Jon Fritz

Redis
Chief Product Officer

Lena Youssef

Sky
Engineering Manager

The Venetian | Lido 3006

Tue, Dec 2 | 11:30 AM

Powering real-time apps with a modernized cache

API210-S

Build faster, more efficient real-time apps by taking caching beyond simple key-value lookups. Redis 8 adds advanced features that tackle common performance bottlenecks and give developers more control over data at scale.

This session covers:

  • Client-side caching: Eliminate hot keys and reduce latency by keeping frequently accessed data close to your app.
  • Redis Query Engine: Query nested JSON and complex data types in memory with secondary indexing and rich search.
  • Redis 8.2 updates: Cut JSON memory usage and leverage new commands for greater performance.
  • Redis Data Integration (RDI): Stream data between Redis and external systems to build unified, real-time architectures.

See how these capabilities power high-performance apps across e-commerce, fintech, healthcare, and AI.
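The client-side caching idea in the first bullet can be modeled without a server. The sketch below is a hypothetical toy, not the redis-py API: reads are served from a local dict when possible, and an invalidation message (which real Redis pushes to clients over RESP3 when a key changes) evicts stale entries.

```python
class ClientSideCache:
    """Toy model of client-side caching: a local per-client dict serves
    repeat reads with no network hop; an invalidation message mimics the
    server-sent eviction real Redis pushes when a key changes."""

    def __init__(self, server: dict):
        self.server = server   # stands in for the remote Redis instance
        self.local = {}        # per-client cache, no round trip needed

    def get(self, key):
        if key in self.local:             # cache hit: served locally
            return self.local[key], "local"
        value = self.server.get(key)      # cache miss: fetch from server
        self.local[key] = value
        return value, "server"

    def invalidate(self, key):
        # In real Redis, the server pushes this when the key changes.
        self.local.pop(key, None)

server = {"product:1": "laptop"}
client = ClientSideCache(server)
print(client.get("product:1"))  # first read goes to the server
print(client.get("product:1"))  # repeat read is served locally
server["product:1"] = "tablet"
client.invalidate("product:1")  # server-side change evicts the local copy
print(client.get("product:1"))  # next read fetches the fresh value
```

This is how hot keys stop being a bottleneck: the most frequently read values never leave the application process until the server says they changed.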

Pieter Cailliau

Redis
VP of Product Management

Mandalay Bay | Oceanside B


Theater sessions

Catch fast talks and demos, live at our booth.

MONDAY - THURSDAY

Bringing AI Agents to production

How do you build an AI agent that truly remembers? The Applied AI Engineering team at Redis built an internal answer engine that learns from the company’s collective knowledge, combining curated content with evolving short- and long-term memory. Join us to learn how we built and scaled this system on ECS and Redis using Agent Memory Server, an open-source framework developed by Redis to power intelligent, memory-driven agents.
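The short- and long-term memory split described above can be sketched as a toy in pure Python. This is not the Agent Memory Server API; the `AgentMemory` class and its keyword-based recall are invented for illustration (a real system would summarize evicted turns and search them by vector similarity).

```python
from collections import deque

class AgentMemory:
    """Toy sketch of agent memory: recent turns live in a bounded
    short-term buffer; turns that age out move to a long-term store
    that can be searched later."""

    def __init__(self, short_term_size: int = 3):
        self.short_term = deque(maxlen=short_term_size)
        self.long_term = []

    def remember(self, turn: str):
        if len(self.short_term) == self.short_term.maxlen:
            # Preserve the oldest turn in long-term memory before the
            # bounded deque evicts it.
            self.long_term.append(self.short_term[0])
        self.short_term.append(turn)

    def recall(self, query: str):
        # Naive keyword match; a real agent would use vector search.
        return [t for t in self.long_term if query.lower() in t.lower()]

memory = AgentMemory(short_term_size=2)
for turn in ["User asked about pricing",
             "User asked about SSO setup",
             "User asked about pricing tiers"]:
    memory.remember(turn)

print(list(memory.short_term))   # the two most recent turns
print(memory.recall("pricing"))  # older pricing turn, recovered from long-term memory
```

The takeaway is the mechanism, not the data structures: nothing the agent learns is lost when the working context fills up, which is what lets it "truly remember."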

Mon, Dec 1 | 4:20 PM | 5:20 PM | 6:20 PM
Tue, Dec 2 | 10:15 AM | 12:15 PM | 4:40 PM | 5:50 PM
Wed, Dec 3 | 11:15 AM | 1:45 PM | 3:45 PM | 5:20 PM
Thu, Dec 4 | 10:45 AM | 12:15 PM | 1:45 PM | 3:15 PM

Brian Sam-Bodden

Principal Applied AI Engineer
Redis

Robert Shelton

Applied AI Engineer
Redis

Justin Cechmanek

Senior Applied AI Engineer
Redis

MONDAY - THURSDAY

Speeding up your applications with Redis Data Integration (RDI)

What if your cache never went cold? With Change Data Capture (CDC) and Redis Data Integration (RDI), your cache shifts from a passive store to a proactive, real-time data layer. This session shows how continuous cache warming eliminates cold starts, cache stampedes, and latency spikes—keeping your critical data always hot, always ready, and always close.
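The change-data-capture loop behind continuous cache warming can be sketched as a toy. Everything here is illustrative (the `WarmCache` class is invented, and RDI does this continuously against real databases rather than via an explicit `sync()` call): each write to the system of record is captured as a change event and replayed into the cache, so reads never hit a cold key.

```python
class WarmCache:
    """Toy CDC pipeline: every change applied to the source database is
    captured and replayed into the cache, keeping it warm. RDI plays
    this role for real databases; these names are illustrative."""

    def __init__(self):
        self.source = {}      # system of record
        self.cache = {}       # always-warm read layer
        self.changelog = []   # captured changes, as a CDC stream would emit

    def write(self, key, value):
        self.source[key] = value
        self.changelog.append((key, value))  # change captured at write time

    def sync(self):
        # Replay captured changes into the cache. In a real pipeline this
        # runs continuously, not on demand.
        for key, value in self.changelog:
            self.cache[key] = value
        self.changelog.clear()

db = WarmCache()
db.write("user:7", {"plan": "pro"})
db.sync()
print(db.cache["user:7"])  # cache was warmed by the change stream, not by a read miss
```

Because the cache is populated by writes rather than by read misses, there is no first slow request to pay for, which is what eliminates cold starts and stampedes.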

Mon, Dec 1 | 4:40 PM | 6:00 PM | 6:20 PM
Tue, Dec 2 | 10:45 AM | 12:45 PM | 2:45 PM | 5:00 PM
Wed, Dec 3 | 10:15 AM | 12:15 PM | 2:15 PM | 4:15 PM | 5:40 PM
Thu, Dec 4 | 11:15 AM | 12:45 PM | 2:15 PM

Ricardo Ferreira

Lead Developer Advocate
Redis

Trevor Martin

Solution Architect
Redis

Ryan Dockman

Solution Architect
Redis

MONDAY - THURSDAY

What's new with Redis

Redis 8.2 delivers measurable latency, throughput and memory gains compared to Redis 7.4 and is GA across Redis Cloud plans. These improvements make Redis cheaper to run at scale and more predictable during heavy load and resharding — critical for caching, search, and modern AI/ML workloads. Redis Open Source 8.4 continues this momentum with developer-focused features that simplify clustering, search, and JSON memory.

Mon, Dec 1 | 5:00 PM | 6:40 PM
Tue, Dec 2 | 11:15 AM | 1:45 PM | 3:45 PM | 5:20 PM
Wed, Dec 3 | 12:45 PM | 2:45 PM | 4:40 PM
Thu, Dec 4 | 10:15 AM | 11:45 AM | 1:15 PM | 2:45 PM

Talon Miller

Principal Technical Marketer
Redis

Christopher Marcotte

Senior Solution Architect
Redis

Jessica Emmons

Senior Solution Architect
Redis

MONDAY - TUESDAY

Redis x Baseten — Ship faster agents with low-latency embedding model inference

Redis vector database powers high-performance agents and other real-time AI applications. But a fast vector database needs to be matched with a performant inference service that can support both high-throughput backfill and low-latency user queries. This talk introduces the major considerations for accelerating embedding model inference across runtime and infrastructure to serve open-source and fine-tuned embedding models at scale in production.

Mon, Dec 1 | 5:40 PM
Tue, Dec 2 | 11:45 AM

Philip Kiely

Head of DevRel
Baseten

TUESDAY

Redis x Weights & Biases — Building smarter RAG systems with Redis & Weave

Learn how to combine Redis for fast vector search with W&B Weave's end-to-end observability in your retrieval-augmented generation (RAG) pipelines. We’ll walk through building a reproducible workflow: from data ingestion and embedding to retrieval, reranking, and response generation, all while tracking performance and behavior at each step.
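The per-step tracking idea can be sketched with a small tracing decorator around a toy RAG pipeline. This is illustrative only, not the W&B Weave API, and the one-dimensional "embedding" (string length) is a deliberate stand-in for a real model.

```python
import time

def traced(step, trace):
    """Record each pipeline step's name and duration, in the spirit of
    Weave-style tracing (illustrative, not the W&B Weave API)."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            trace.append((step, time.perf_counter() - start))
            return result
        return inner
    return wrap

trace = []

@traced("embed", trace)
def embed(text):
    return [float(len(text))]  # toy 1-d "embedding": string length

@traced("retrieve", trace)
def retrieve(query_vec, index):
    # Nearest document by absolute distance in the toy embedding space.
    return min(index, key=lambda doc: abs(index[doc][0] - query_vec[0]))

@traced("generate", trace)
def generate(query, context):
    return f"{query} -> answered using {context!r}"

index = {doc: [float(len(doc))] for doc in ["redis", "vector search", "weave"]}
q_vec = embed("search tools")          # 12 characters -> [12.0]
ctx = retrieve(q_vec, index)           # nearest is 'vector search'
answer = generate("search tools", ctx)
print(answer)
print([name for name, _ in trace])     # ['embed', 'retrieve', 'generate']
```

Capturing name and latency at every step is what makes the workflow reproducible and debuggable: when retrieval quality or latency regresses, the trace shows exactly which stage changed.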

Tue, Dec 2 | 1:15 PM

Anu Vatsa

Sr. AI Solutions Engineer, Pre-Sales
Weights & Biases

TUESDAY

Redis x Stacker Group — Unlocking real-time context engineering

Join Redis and Stacker Group for a deep dive into the new Real-Time Decisioning Stack (RDS) — a breakthrough architecture that moves enterprise decisions from 300–900ms to 5–10ms, enabling true real-time Next Best Action (NBA) and Next Best Offer (NBO).

RDS unifies Redis Enterprise, Redis Data Integration (RDI), Featureform’s online feature store, and Stacker’s Relight Decision Memory to deliver continuously learning, intent-aware decisions — powered by live data, LLM-ready context engineering, and built-in attribution.

Discover how enterprises can act instantly on customer behavior, reduce churn, improve conversions, and operationalize AI with a decisioning engine that learns and proves ROI with every interaction.

Tue, Dec 2 | 2:15 PM

Charlie Henderson

Founder
Stacker Group

TUESDAY - WEDNESDAY

Redis x Arcade AI — Prime Yesterday: the AI that knows what you need before you do

If a travel agent tells you about a trip but can’t book it for you, are they really a travel agent? Great agents do real work for you. Yet connecting agents to enterprise systems is not as easy as downloading the latest MCP server. Come see how Arcade and Redis can help you build agents that send emails, update support tickets, and much, much more. Oh, and one more thing: your security team will love it too!

Tue, Dec 2 | 3:15 PM
Wed, Dec 3 | 11:45 AM

Shub Argha

Head of Solutions Engineering
Arcade AI

TUESDAY - WEDNESDAY

Redis x Tensormesh — Scaling AI inference beyond single GPU limits with Redis

AI inference hits hard limits fast: GPU memory caps context windows, latency climbs with load, and costs spiral as you scale. Tensormesh solves this by intelligently caching and reusing computations across distributed infrastructure, cutting GPU costs 5-10× while accelerating inference. By integrating Redis as a high-performance, platform-independent KV cache layer, we enable shared memory across multiple hosts, infinite horizontal scaling, and built-in failover. This session demonstrates how Redis + Tensormesh delivers more concurrent users and enterprise-grade reliability without cloud provider lock-in, turning your AI infrastructure into a competitive advantage.
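The core caching idea, reusing expensive computation for a prompt prefix instead of recomputing it, can be sketched in a toy. This is purely illustrative of the concept, not Tensormesh's design; the `SharedKVCache` class and its hash-based stand-in for attention tensors are invented for this sketch.

```python
class SharedKVCache:
    """Toy prefix cache: the expensive 'attention' computation for a
    prompt prefix is stored once and reused across requests (or hosts
    sharing the store) instead of being recomputed on the GPU."""

    def __init__(self):
        self.store = {}
        self.computed = 0  # counts expensive computations actually performed

    def kv_for(self, tokens: tuple):
        if tokens in self.store:
            return self.store[tokens]           # reuse: no GPU work
        self.computed += 1
        kv = [hash(t) % 97 for t in tokens]     # stand-in for attention KV tensors
        self.store[tokens] = kv
        return kv

cache = SharedKVCache()
system_prompt = ("you", "are", "a", "helpful", "assistant")
cache.kv_for(system_prompt)   # first request pays the compute cost
cache.kv_for(system_prompt)   # later requests reuse the cached result
print(cache.computed)         # the expensive step ran only once
```

When the store behind `self.store` is a shared service such as Redis rather than a local dict, every host serving the same prompts benefits from every other host's work, which is where the cost savings come from.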

Tue, Dec 2 | 4:15 PM
Wed, Dec 3 | 3:15 PM

Bryan Bamford

Marketing Manager
Tensormesh

WEDNESDAY

Redis x Tavily — Onboarding the next billion agents to the web

Many AI agents struggle because their data is outdated, retrieval is slow, or context is forgotten. In this session, Dean will show how combining the vector and query power of the Redis Query Engine with the live web search capabilities of Tavily enables agents to rapidly access fresh information and remember it over time. You’ll see how agents can go beyond one-shot retrieval to persistent memory, low-latency search, and real-time web refreshes, improving responsiveness, accuracy, and user experience. Whether your agents power assistive chatbots, enterprise workflows, or edge devices, you’ll walk away with patterns you can apply in your own stack.
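The combination of remembered results and live refreshes can be sketched as freshness-aware retrieval. Everything below is a hypothetical toy, not the Tavily or Redis API: answers come from memory if they were fetched recently, otherwise from a (stubbed) live web search, after which the result is remembered with a timestamp.

```python
import time

class FreshRetriever:
    """Toy freshness-aware retrieval: serve remembered results while
    they are fresh, fall back to live search when they are stale or
    unseen, and remember what was fetched. Illustrative only."""

    def __init__(self, web_search, ttl_seconds: float = 60.0):
        self.web_search = web_search
        self.ttl = ttl_seconds
        self.memory = {}  # query -> (result, fetched_at)

    def answer(self, query: str):
        hit = self.memory.get(query)
        if hit and time.monotonic() - hit[1] < self.ttl:
            return hit[0], "memory"          # remembered and still fresh
        result = self.web_search(query)      # stale or unseen: refresh live
        self.memory[query] = (result, time.monotonic())
        return result, "web"

calls = []
def fake_search(query):
    calls.append(query)                      # stub for a live web search
    return f"results for {query}"

agent = FreshRetriever(fake_search, ttl_seconds=60)
print(agent.answer("redis 8 release notes"))  # first ask hits the live search
print(agent.answer("redis 8 release notes"))  # repeat ask is served from memory
print(len(calls))                             # the live search ran only once
```

Tuning the TTL per query class (news vs. documentation, say) is how an agent balances freshness against latency and search cost.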

Wed, Dec 3 | 10:45 AM

Dean Sacoransky

Lead Forward Deployed Engineer
Tavily

WEDNESDAY

Redis x LangChain — Context Engineering: building effective and reliable agents

AI applications typically involve several steps, some of which include calls to LLMs. LLMs are non-deterministic and difficult to make reliable. The more steps you have, the more opportunities for something to go wrong. Context plays a crucial role in getting the LLM to do what you want. Mismanaged context leads to unpredictable results. It’s a difficult problem to solve, and there is no single, well-defined solution. Join us and learn about the strategies being developed at LangChain and Redis to approach context engineering.
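One concrete context-engineering decision is what to include when everything will not fit. The sketch below shows one simple strategy among many (a character budget standing in for a token budget; all names are invented for illustration, and this is not a LangChain or Redis API): keep the system prompt and query, then fill the remaining budget with the most recent memories.

```python
def build_context(system: str, memories: list, query: str, budget: int):
    """Toy context assembly: the system prompt and query are always kept;
    remaining budget (characters here, tokens in practice) is filled with
    the most recent memories that fit."""
    remaining = budget - len(system) - len(query)
    kept = []
    for memory in reversed(memories):        # newest memories first
        if len(memory) <= remaining:
            kept.append(memory)
            remaining -= len(memory)
    # Restore chronological order for the model.
    return [system] + list(reversed(kept)) + [query]

context = build_context(
    system="You are a support agent.",
    memories=["old ticket #1 details",
              "user prefers email",
              "user is on the pro plan"],
    query="How do I enable SSO?",
    budget=100,
)
print(context)  # 'old ticket #1 details' no longer fits the budget and is dropped
```

Even this naive policy makes the failure mode visible: whatever is silently dropped is context the LLM never sees, which is why managing it deliberately matters.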

Wed, Dec 3 | 1:15 PM

Robert Xu

Deployed Engineer
LangChain

Justin Cechmanek

Senior Applied AI Engineer
Redis


Catch our latest demos

Celebrity face match with vector sets

Find your celebrity lookalike in this interactive demo designed around Redis Vector Sets.

Meet Haink, our internal AI agent

See how our internal AI agent, Haink, integrates with Slack and uses RAG and memory.

Redis Data Integration (RDI) in action

Learn how Redis Data Integration makes it easier to keep Redis in sync with your other databases.

Redis live chat

Interact with a live chat agent to understand how we enable agent memory.

Faster from the start

Accelerate performance and innovation with Redis on AWS.