~/case-studies $ cat

Bringing a 19,000-piece animation art gallery online

Client
Pending permission
Industry
Animation art retail
Engagement
Platform modernization · AI-native application · private AI cloud · managed operations
Timeline
Phased over ~9 months · ongoing
Results
  • 30% lift in overall revenue in the first year of the modernized platform
  • Online transactions unlocked for the first time in the gallery's history
  • Roughly 100 hours of migration work eliminated through automated Magento → WordPress migration
  • Discovery across the full 19,000-piece catalog through a conversational assistant grounded in live inventory
  • All commercially sensitive components hosted in our private AI cloud — clear data sovereignty and auditability
  • Ongoing operation by the team that built the system, under a predictable managed services engagement

Executive summary

A long-established animation art gallery with an inventory of approximately 19,000 pieces — original cels, production drawings, concept art, and limited editions from animated film and television — was operating on an aging Magento platform that had never successfully supported online transactions. Every sale still happened in person or over the phone. Their website was a liability rather than a channel.

Skyview Labs led a phased program that modernized the platform, unlocked online sales for the first time in the gallery’s history, and delivered a custom AI-native discovery assistant capable of reasoning over the full catalog. Within the first year of the modernized platform going live, the gallery’s online sales channel contributed a 30% lift to overall revenue.

The system is hosted in our private AI cloud in Marlborough, Massachusetts. Skyview Labs continues to operate it under a managed services engagement.

The situation

The gallery had been a respected name in its regional art market for decades. Their physical presence, their curation, and their relationships with artists and collectors were not in question. What was in question was everything that happened outside the gallery walls.

Their website ran on an aging Magento installation. The original developer had long since moved on. Routine changes took weeks. Hosting was expensive, and the platform had accumulated enough undocumented customization that nobody wanted to touch it. But the most consequential issue was that despite running on an e-commerce platform, the gallery had never been able to complete a sale online. Every transaction still required a phone call, an email exchange, or a visit to the gallery.

For a collection of roughly 19,000 pieces, the consequences of that limitation compounded quickly:

  • A collector in another state who discovered an artist through the gallery’s website had no way to purchase. They had to reach someone during business hours, often over multiple calls, before a piece could be held, invoiced, and shipped.
  • Only a small fraction of the full catalog was represented on the public site. The remaining inventory was effectively invisible to anyone not physically present.
  • The gallery’s staff were performing work that software should have been doing: matching prospective buyers to relevant pieces from memory, coordinating availability across the physical gallery and offsite storage, and manually processing every transaction.

The gallery’s leadership understood that the website was holding the business back. They had collected proposals from multiple firms, but none addressed both the platform problem and the catalog-scale discovery problem in a way that felt deliverable.

Our approach

Skyview Labs was brought in after a discovery conversation that established two things. First, the underlying platform needed to be replaced before any new capability could sit on top of it. Second, the most valuable thing we could eventually deliver was not a better product page but a fundamentally different way for visitors to encounter the collection: one that worked at the scale of 19,000 pieces.

We proposed a three-phase engagement:

  1. Platform modernization. Replace the aging Magento installation with a modern, containerized architecture. Enable real online transactions, cleanly integrated with the gallery’s point-of-sale and payment systems.
  2. AI-native discovery application. Design and build a custom assistant that helps visitors find pieces through natural conversation, grounded in the full catalog.
  3. Inventory integration at catalog scale. Connect the assistant and the storefront to the gallery’s inventory system of record so the entire 19,000-piece collection becomes searchable, recommendable, and purchasable.

Each phase was scoped to deliver independent value. The gallery did not have to wait nine months for a single cutover. They saw measurable improvement at the end of each phase.

Phase one: platform modernization

The first phase replaced the Magento installation with a containerized WordPress architecture running in our private AI cloud. The decision to move to WordPress was driven by the client’s operational reality: their staff and their existing content workflows were built around editing experiences that WordPress handles well. The choice prioritized the gallery’s day-to-day team over architectural novelty.

More significant than the platform choice was the way we handled the migration itself. Our approach saved the client approximately 100 hours of work compared to a conventional rebuild by automating the extraction and transformation of content, taxonomies, and media from the Magento data store rather than reconstructing each section manually.
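The migration tooling itself is not public, but the core idea can be sketched in a few lines, under the assumption of a simplified Magento row shape: each product row is transformed programmatically into a WordPress-style post record, so no section has to be rebuilt by hand. All field names here are illustrative, not the actual schemas used in the engagement.

```python
# Illustrative sketch of automated Magento -> WordPress migration:
# transform one exported catalog row into a WordPress-style post payload.
# Field names on both sides are hypothetical stand-ins.

def magento_row_to_wp_post(row: dict) -> dict:
    """Map a Magento product row to a WordPress post record."""
    return {
        "post_type": "product",
        "post_title": row["name"].strip(),
        "post_status": "publish" if row.get("is_active") else "draft",
        "post_content": row.get("description", ""),
        "meta": {
            "_sku": row["sku"],
            "_price": f'{row["price"]:.2f}',
        },
        # Magento category paths become WordPress taxonomy terms.
        "categories": [
            part.strip()
            for part in row.get("category_path", "").split("/")
            if part.strip()
        ],
    }
```

Run across thousands of rows, a transform like this is what turns a weeks-long manual rebuild into a reviewable batch job.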

The new platform introduced capabilities the previous installation could not support:

Working online commerce. For the first time in the gallery’s history, a customer could complete a purchase without a phone call. This required custom integration work with the gallery’s existing point-of-sale system so that physical and online inventory stayed in sync, and with their payment processor so that transactions flowed cleanly into their existing reconciliation workflow. We deliberately integrated rather than replaced — the gallery’s back-office was working, and our job was to extend it, not disrupt it.
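The synchronization requirement can be illustrated with a minimal sketch: both sales channels consult one availability ledger, so a piece sold in person disappears from the online channel at the same moment, and vice versa. The class below is a toy stand-in for the real POS integration, not its implementation.

```python
# Toy model of physical/online inventory sync: a single availability
# ledger shared by both channels. Sketch only; the production system
# integrates with the gallery's existing point-of-sale.

class InventoryLedger:
    def __init__(self, skus):
        self._available = set(skus)

    def is_available(self, sku: str) -> bool:
        return sku in self._available

    def record_sale(self, sku: str, channel: str) -> bool:
        """Mark a piece sold via either channel.

        Returns False if the piece was already gone, which is exactly
        the double-sale case the integration has to prevent.
        """
        if sku not in self._available:
            return False
        self._available.discard(sku)
        return True
```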

Containerized, orchestrated infrastructure. Every component of the new site runs in Docker containers orchestrated by Kubernetes in our Marlborough data center. This gives the gallery a platform that can be updated without downtime, scaled horizontally as traffic demands, and operated with the same rigor we apply to enterprise workloads.

Edge protection and zero-trust access. All public traffic reaches the site through Cloudflare Tunnels. The gallery’s infrastructure has no open inbound ports at the data center edge. Web application firewall, DDoS mitigation, and bot management run at Cloudflare’s edge, in front of every request. Administrative access to the platform uses zero-trust patterns rather than traditional VPN or public-facing admin surfaces.

The platform went live within the scoped timeline. Within the first year of operation, the online sales channel — a capability the gallery had never meaningfully had — contributed a 30% increase in overall revenue.

Phase two: the discovery assistant

Modernizing the platform solved the commerce problem. It did not solve the discovery problem. A 19,000-piece collection is not something a website can meaningfully browse. Even a well-organized category structure leaves visitors exploring a handful of tags at a time, seeing a tiny fraction of what the gallery actually holds.

The gallery’s founders had a specific intuition about the experience they wanted to recreate. When a customer walks into the physical gallery and describes their space — a living room that feels too sparse, a medical office that needs to feel calm, a restaurant whose entryway should make a statement — a good curator listens, asks a few questions, and leads them to the right wall. That interaction is an act of interpretation, not retrieval.

Our goal for phase two was to build an online experience that performed that same interpretive work at scale.

What we built

The discovery assistant is a conversational application integrated into the gallery’s website. A visitor can describe what they are looking for in plain language — “something calming for a dentist office,” “a bold piece for a restaurant entryway,” “blues and grays for over the sofa, no people” — and the assistant returns a curated set of recommendations drawn from the live catalog.

Architecture

The assistant is not a single model behind a chat window. It is a multi-component system, and each component runs on the infrastructure appropriate to its role:

Vision-based cataloging. Each piece in the collection is analyzed using OpenAI’s vision capabilities to extract a rich set of visual and thematic attributes: dominant colors, composition, subject matter, medium, mood, stylistic lineage, and the less-structured qualities that matter to collectors. This is a one-time operation per piece, run at ingestion or when inventory changes. Using a best-in-class external vision API for a low-frequency, high-value operation is the right economic decision. The gallery pays for exceptional analysis once per piece, not per visitor interaction.
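As a concrete illustration of the cataloging step, the function below builds a vision request in the general shape of OpenAI's chat-completions API, asking for the attribute set described above as structured JSON. The model name, prompt wording, and attribute keys are assumptions for the sketch, not the engagement's actual configuration.

```python
# Sketch of the one-time-per-piece cataloging call: a text prompt plus
# the piece's image, requesting structured attributes back as JSON.
# Model choice and prompt are illustrative assumptions.

def build_cataloging_request(image_url: str) -> dict:
    prompt = (
        "Describe this artwork for a gallery catalog. Return JSON with "
        "keys: dominant_colors, composition, subject_matter, medium, "
        "mood, style."
    )
    return {
        "model": "gpt-4o",  # hypothetical; actual model unspecified
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        "response_format": {"type": "json_object"},
    }
```

Because this runs once per piece rather than per visitor, the per-request cost of a best-in-class vision model is amortized across every future interaction with that piece.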

Embedding and retrieval. The output of the vision analysis, combined with structured catalog metadata, is embedded and stored in a vector database inside our private cloud. When a visitor describes what they’re looking for, their request is embedded and matched against the catalog to produce a candidate set of relevant pieces. This layer never leaves our infrastructure.
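At its core, the retrieval step is nearest-neighbor search over embeddings. A dependency-free sketch of the matching logic follows; the production system uses a vector database, not an in-memory loop.

```python
# Minimal sketch of embedding retrieval: rank catalog pieces by cosine
# similarity to the visitor's embedded request. Illustrative only.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_k(query, catalog, k=5):
    """catalog maps SKU -> embedding; returns the k most similar SKUs."""
    ranked = sorted(catalog, key=lambda sku: cosine(query, catalog[sku]),
                    reverse=True)
    return ranked[:k]
```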

Conversation and reasoning. The conversational experience is driven by a hybrid of self-hosted and external language models. A self-hosted open-weight LLM running in our Kubernetes cluster handles the majority of routine conversational work: greetings, clarifying questions, initial intent parsing, and response composition. For the subset of interpretive tasks that benefit from a higher reasoning ceiling — understanding what “calming for a dentist office” actually implies about color palette, subject matter, and energy — the assistant routes to Anthropic’s Claude API. The division is deliberate: the gallery gets production-grade reasoning where it matters most, and cost-predictable, privately hosted inference for the bulk of traffic.
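The routing decision can be sketched as a simple dispatch. The keyword heuristic below is a stand-in for the real intent classification, which this sketch does not attempt to reproduce.

```python
# Sketch of hybrid model routing: routine turns stay on the self-hosted
# model; interpretive requests escalate to the external reasoning model.
# The keyword cues are an illustrative placeholder for a real classifier.

INTERPRETIVE_CUES = ("for a", "feels", "mood", "calming", "bold", "statement")

def choose_model(message: str) -> str:
    text = message.lower()
    if any(cue in text for cue in INTERPRETIVE_CUES):
        return "external-reasoning"  # e.g. the Claude API
    return "self-hosted"             # open-weight model in the cluster
```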

Inventory integration. The assistant is grounded in live inventory. Recommendations reflect what is actually available for sale right now, pulled through integration with the gallery’s system of record. A piece that was sold an hour ago does not appear. A piece added to the catalog this morning is already in play.
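In code terms, grounding in live inventory amounts to a filter between retrieval and presentation, as this hypothetical sketch shows:

```python
# Sketch of live-inventory grounding: drop any retrieval candidate the
# system of record no longer shows as available. Names are illustrative.

def grounded_recommendations(candidates, inventory):
    """candidates: ranked SKUs from retrieval;
    inventory: {sku: currently available?} from the system of record."""
    return [sku for sku in candidates if inventory.get(sku, False)]
```

An unknown SKU defaults to unavailable, so a stale candidate can never be recommended for sale.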

Point-of-sale and payment continuity. When a recommendation leads to a purchase, the transaction flows through the same POS and payment integrations built in phase one. There is no separate purchasing path for AI-driven sales. The assistant is a discovery layer, not a parallel commerce system.

What this means in practice

A visitor arrives on the site, describes their situation, and within seconds is looking at pieces from the gallery’s collection that a thoughtful curator might have pulled for them. They can refine: “warmer,” “smaller,” “nothing abstract,” “by a local artist.” The assistant responds in context. When they find a piece that feels right, they can purchase it directly.

The experience reaches across the full 19,000-piece catalog, not the subset a human could hold in working memory.

Phase three: catalog-scale integration

Phase three is in active deployment. The objective is to complete the integration between the gallery’s inventory system of record and the assistant’s retrieval layer so that every piece the gallery owns — including offsite storage, consignments, and newly acquired work — becomes available through the discovery experience automatically and in real time.

At the time of this writing, a growing percentage of the 19,000-piece catalog is integrated. The remainder is being brought in through a combination of automated ingestion and curator review. Once complete, every piece the gallery acquires in the future will flow into the assistant’s catalog on the same day it is added to inventory, with vision analysis, embedding, and indexing handled by the platform without human intervention.
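The hands-off flow described above chains the earlier stages. The sketch below uses placeholder stage functions for the real analysis, embedding, and indexing services; the pipeline shape is the point, not the names.

```python
# Sketch of automated catalog ingestion: a newly added piece passes
# through vision analysis, embedding, and indexing with no manual step.
# The stage callables stand in for the real services.

def ingest(piece, analyze, embed, index):
    """Run one piece through the pipeline; returns the indexed record."""
    attributes = analyze(piece)               # vision-based cataloging
    vector = embed({**piece, **attributes})   # embed metadata + attributes
    record = {"sku": piece["sku"],
              "attributes": attributes,
              "vector": vector}
    index(record)                             # write to the retrieval layer
    return record
```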

Hosting and operations

Every Skyview-operated component of the gallery’s platform runs in our private AI cloud in Marlborough, Massachusetts. The site, the assistant, the vector database, the self-hosted LLM, and the supporting services are containerized and orchestrated in Kubernetes.

Why private hosting matters here. The gallery’s catalog, their inventory data, their customer interactions with the assistant, and their sales data are commercially sensitive. Running this system on a public, multi-tenant cloud would expose it to categories of risk the gallery did not want to carry: shared-tenant inference, API-level logging by foundation-model providers, data-residency ambiguity. By hosting the core system in our private cloud, we give the gallery clear answers to every question about where their data is and who has access to it.

The external APIs, explained. The two external components of the architecture — OpenAI Vision for cataloging and Anthropic Claude for higher-reasoning interpretation — are used deliberately and transparently. Vision processing is a one-time-per-piece cataloging operation, not a per-visitor data flow. Claude is used as a reasoning assist for specific interpretive prompts, not as the primary chat surface. Both choices reflect our operating principle: use the right tool for each job, be honest about where each component runs, and keep the hot paths on infrastructure the client can trust.

Security posture. The platform operates behind Cloudflare Tunnels, with no public inbound exposure at our data center edge. Edge-level WAF, DDoS protection, and bot management filter traffic before it reaches our infrastructure. Administrative access uses zero-trust authentication. Secrets are managed in an encrypted store with access scoped to the services that need them.

Managed services. Skyview Labs continues to operate the platform under a monthly engagement. We handle hosting, monitoring, performance tuning, model and dependency updates, security patching, and ongoing refinement of the assistant as visitor usage patterns evolve. The gallery does not have a dev shop on retainer. They have the engineers who built the system continuing to run it.

What this engagement illustrates about how Skyview Labs works

We start with the foundation, not the feature. The gallery wanted an AI discovery experience from the first conversation. We could have built a chat interface on top of their broken platform and shipped a demo. We would have been setting them up for a second engagement to rebuild the thing we just added features to. Instead, we sequenced the work so each phase made the next one possible.

We integrate with systems of record, not around them. Most of the interesting AI work in a real business lives at the seams — where the new capability meets the inventory system, the POS, the payment processor, the CRM. Those seams are where inexperienced firms cut corners and where production systems fail. We treated the POS and payment integrations with the same seriousness as the assistant itself, because a discovery experience that surfaces pieces the gallery can’t actually sell would be worse than useless.

We use the right tool for each job and we’re honest about it. The assistant uses a self-hosted model for the hot path, an external reasoning model for a subset of interpretive work, and an external vision API for one-time cataloging. That’s a deliberate architecture, not a sign of indecision. We tell clients exactly where each component runs and why.

We run what we build. The gallery isn’t managing a software project. They’re using a working system that we operate. When inventory changes, when visitor patterns shift, when a model improves, the people responding to those changes are the same engineers who designed the system in the first place.

Is your organization in a similar position?

This engagement is a useful reference for organizations with any of the following characteristics:

  • A large, structured catalog or inventory where discovery is a bottleneck — retail, wholesale, specialty commerce, animation and fine art, real estate, manufacturing.
  • Legacy e-commerce or operational platforms that are expensive to maintain and limit what the business can do next.
  • Commercially or regulatorily sensitive data that should not be sent to public AI APIs at any significant volume.
  • A clear opportunity for a conversational or AI-native interface, but no internal AI team to design, build, and operate it.
  • Integration complexity — POS, payment, inventory, CRM — that requires a build partner who takes systems-of-record seriously.

If any of that describes your organization, we would like to talk. A 30-minute conversation is usually enough to tell whether the work we do is the right fit.

~/contact $ open

Want to talk about this work?

A 30-minute conversation is usually enough to tell whether we're the right partner for what you're working on.