What is TAMS, and Why Does It Matter?

Screenshot of Norsk Studio's TAMS component

Your viewers expect to control when and how they watch—pausing live broadcasts, jumping to key moments, and watching multiple angles simultaneously. Behind the scenes, delivering these experiences has traditionally required complex infrastructure, duplicate storage, and format-specific integrations that slow down production workflows. TAMS (Time Addressable Media Store) makes it simple: one ingest, one storage system, and an HTTP API that lets any tool instantly access any moment of your media by simply requesting a time range. The API is open, allowing the creation of an ecosystem where multiple tools and algorithms can work together seamlessly on the same content, regardless of the service provider from which it originates.

Why TAMS?

TAMS defines a method for making media time-addressable. Every recorded segment is indexed against a timeline, allowing clients to request video not just as a continuous stream, but as specific time ranges. 

Instead of being something transient, video becomes a dataset that applications can explore. A sports broadcaster might fetch the last five minutes for an instant replay. A news outlet could trim and publish a highlight clip while the press conference is still underway. A streaming service could implement time-shifted playback, letting viewers pause and rewind without interrupting the live feed.
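The replay and clipping scenarios above all reduce to one operation: ask a flow for a time range. A minimal sketch of how a client might build such a request, assuming a TAMS-style `/flows/{flowId}/segments?timerange=...` endpoint; the base URL, Flow ID, and exact timerange syntax here are illustrative, not taken from a specific deployment:

```typescript
// TAMS-style timerange: half-open interval with seconds:nanoseconds timestamps.
function timerange(startSec: number, endSec: number): string {
  return `[${startSec}:0_${endSec}:0)`;
}

// Build the segment-listing URL for a flow and a time window.
function segmentsUrl(
  baseUrl: string,
  flowId: string,
  start: number,
  end: number,
): string {
  const tr = encodeURIComponent(timerange(start, end));
  return `${baseUrl}/flows/${flowId}/segments?timerange=${tr}`;
}

// A sports broadcaster fetching the last five minutes of a match
// (e.g. 3300s-3600s on the flow's timeline). The Flow ID is hypothetical.
const url = segmentsUrl(
  "https://tams.example.com",
  "6f9b2e4c-0d1a-4c2b-9f3e-1234567890ab",
  3300,
  3600,
);
// const segments = await fetch(url).then((r) => r.json());
```

Because the request is plain HTTP with the time range in the URL, it is cache-friendly: the same range always maps to the same URL.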

Because TAMS is HTTP-based, it naturally integrates into modern media delivery pipelines. Existing caching, CDNs, and player frameworks can all work with TAMS endpoints without requiring specialized integrations. 

The design is deliberately cloud-friendly. By separating media description (the JSON catalog) from media essence (the actual files or objects), TAMS can delegate storage to scalable backends, whether that’s cloud object storage, CDN caches, or local disk during development.

Core Concepts

To make media time-addressable, TAMS organizes content into a structured hierarchy:

  • Sources:  the recordings or live feeds ingested into the store.
  • Flows: logical timelines constructed from one or more sources.
  • Segments: the time-aligned chunks of media that clients can request directly.

Every Flow is assigned a Flow ID, a universally unique identifier (UUID) that provides a stable reference via the API. Segments within a Flow are indexed by time. Once created, they never change, meaning the same Flow ID and time range will always return the same media.
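The timeline indexing described above can be sketched as a simple overlap query. This is a minimal model, assuming seconds-based half-open ranges; field names are illustrative rather than taken from the TAMS schema:

```typescript
// An immutable, time-aligned chunk of media on a flow's timeline.
interface Segment {
  start: number; // seconds, inclusive
  end: number;   // seconds, exclusive
  url: string;   // where the essence can be fetched
}

// Return every segment that overlaps the half-open range [from, to).
function segmentsInRange(timeline: Segment[], from: number, to: number): Segment[] {
  return timeline.filter((s) => s.start < to && s.end > from);
}

const timeline: Segment[] = [
  { start: 0, end: 10, url: "/seg/0" },
  { start: 10, end: 20, url: "/seg/1" },
  { start: 20, end: 30, url: "/seg/2" },
];

// Requesting 8s-22s touches all three segments. Because segments never
// change once written, the same query always yields the same answer.
segmentsInRange(timeline, 8, 22); // → three segments
```

Immutability is what makes the `(Flow ID, time range)` pair a stable cache key for CDNs and clients alike.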

Sources provide an editorial layer of identity that groups different flows representing the same content. For example, a live football match could be ingested as both an H.264 stream and a JPEG2000 stream. Each encoding would have its own Flow ID, but they would share a Source ID, ensuring that a request for “10:15–10:30 of this Source” always corresponds to the same underlying moments in the match, regardless of format.
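The Source-to-Flow resolution in the football example could be sketched as follows; the IDs and codec labels are hypothetical:

```typescript
// Two encodings of the same match share a Source ID but have distinct Flow IDs.
interface Flow {
  id: string;
  sourceId: string;
  codec: string;
}

const flows: Flow[] = [
  { id: "flow-h264", sourceId: "match-1", codec: "h264" },
  { id: "flow-j2k", sourceId: "match-1", codec: "jpeg2000" },
];

// Resolve "this Source, in this format" to a concrete Flow.
function flowFor(sourceId: string, codec: string): Flow | undefined {
  return flows.find((f) => f.sourceId === sourceId && f.codec === codec);
}
```

A request for a time range on `match-1` can then be served from whichever encoding the client prefers, while the timeline positions stay identical across both flows.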

Sources, Flows, and Segments are all exposed through the TAMS API as JSON at well-defined URLs. Each response carries the relevant identifiers (UUIDs) alongside media essence parameters such as codec, container format, and the time range covered.

The content model also supports multiple-track flows, which act as collections of related single-track flows (e.g., a video stream plus two audio tracks). This separation allows you to store, query, and remix essences independently — a helpful feature if you want to add an alternative commentary track, a sign-language overlay, or a new audio mix without repackaging the entire stream.
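As a sketch of why this separation is useful, consider swapping the commentary track in a collection of single-track flows. The track structure below is an illustrative assumption, not the TAMS collection schema:

```typescript
// A multi-track flow modelled as a collection of single-track flows.
interface TrackFlow {
  id: string;
  kind: "video" | "audio";
  label: string;
}

const collection: TrackFlow[] = [
  { id: "v1", kind: "video", label: "main camera" },
  { id: "a1", kind: "audio", label: "stadium mix" },
  { id: "a2", kind: "audio", label: "alt commentary" },
];

// Keep the video track, swap in the requested audio track.
// The video essence is untouched; only the selection changes.
function remix(tracks: TrackFlow[], audioLabel: string): TrackFlow[] {
  return tracks.filter((t) => t.kind === "video" || t.label === audioLabel);
}
```

`remix(collection, "alt commentary")` yields the main camera plus the alternative commentary, with no repackaging of the stored video.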

Norsk’s TAMS Deployment

Norsk’s MediaStore forms the foundation for our TAMS implementation. It can record live streams or load existing media into a time-indexed store, where content can be replayed, clipped, or captured as a snapshot. By exposing this data through the open TAMS API, MediaStore presents TAMS sources and flows that a compliant client can interact with.

Because MediaStore is part of the larger Norsk Engine, you can seamlessly combine it with other Norsk features to build TAMS workflows. You can ingest from a wide range of inputs, including SRT, RTMP, MP4 files, and transport streams, and deliver media to TAMS clients without needing to deal with complex configurations or format-specific tuning.

Under the hood, Norsk keeps TAMS integration simple but robust:

  • Media is written into the Norsk MediaStore, which acts as the source of truth for all segment data. When a TAMS client queries a Flow, the API iterates directly over these stored segments to return precise, time-aligned results. There is no duplicate in-memory state to drift out of sync.
  • New streams and versions can be registered with TAMS at runtime. For example, when MediaStore creates a new writer configuration, the registration system notifies TAMS immediately, so that the new Flow is visible to clients without manual intervention.
  • Clients can discover and track Sources and Flows either through REST endpoints or via live event streams (e.g., WebSocket). This access pattern makes it easy to build monitoring dashboards, automated highlight clipping, or real-time notifications on top of the same store.
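A client consuming the live event stream might maintain its catalog with a small reducer like the one below. The event shape is an assumption for illustration; Norsk's actual event payloads may differ:

```typescript
// Events a TAMS client might receive over a live stream (e.g. a WebSocket).
type TamsEvent =
  | { kind: "flow-created"; flowId: string }
  | { kind: "flow-deleted"; flowId: string };

// Fold one event into the set of known Flow IDs, returning a new set.
function applyEvent(known: Set<string>, ev: TamsEvent): Set<string> {
  const next = new Set(known);
  if (ev.kind === "flow-created") next.add(ev.flowId);
  else next.delete(ev.flowId);
  return next;
}
```

A monitoring dashboard could feed incoming WebSocket messages through `applyEvent` to keep an up-to-date catalog without repeatedly re-polling the REST endpoints.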

The result: live ingest is translated into immutable, queryable timelines, new flows are registered automatically, and a catalog of media segments is exposed that any TAMS client can fetch, remix, or transform.

Norsk TAMS Player

In addition to the core Media Store and TAMS API, we’ve also developed a TAMS player that connects to Norsk via the TAMS API. The player queries the API for available sources (live feeds or recordings), drills down into their flows (timelines in different formats), and then fetches the actual time-aligned segments for playback.

Here’s a screenshot of the player running in Norsk Studio, which accepts an SRT stream, adds an onscreen graphic, and sends the composed stream to the TAMS output:

Screenshot of Norsk Studio's TAMS component

And here is a video showing the player running inside Norsk Studio, highlighting the scrubbing and thumbnails that TAMS enables.

From the user’s perspective, the experience is simple:

  • They can select from a list of active sessions or past recordings.
  • Each session may expose multiple versions of the same content (for example, different codecs or bitrates).
  • Video and audio tracks can be combined on the fly, with the player assembling them into a single stream for playback.

Under the hood, the player polls the TAMS API to stay in sync with newly ingested material, builds the correct flow URLs, and hands them to a web player component. Since everything is HTTP-based, playback works in a regular browser without requiring plugins or special integration.
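The sync step of such a polling loop could be sketched as below. This is not the Norsk player's actual internals; the endpoint shape and merge strategy are illustrative assumptions:

```typescript
// A time-aligned segment as the player tracks it locally.
interface Segment {
  start: number;
  end: number;
  url: string;
}

// Merge freshly fetched segments into the local timeline, skipping
// anything already known (segments are immutable, so URLs are stable keys).
function mergeNew(local: Segment[], fetched: Segment[]): Segment[] {
  const known = new Set(local.map((s) => s.url));
  return [...local, ...fetched.filter((s) => !known.has(s.url))];
}

// Poll loop, runnable in any browser with plain fetch:
// setInterval(async () => {
//   const fetched = await fetch(`${base}/flows/${flowId}/segments`)
//     .then((r) => r.json());
//   timeline = mergeNew(timeline, fetched);
// }, 2000);
```

Deduplicating by URL keeps the loop idempotent: polling more often than new material arrives simply returns the same timeline.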

This player serves as a reference client for creating workflows using TAMS. It shows how straightforward it is to discover media, request precise time ranges, and play them back immediately using Norsk.

Want to learn more about TAMS (and more) in Norsk? Get in touch or set up a demo.

Author

  • Kelvin Kirima is a developer at id3as, proficient in Rust, JavaScript/TypeScript, PostgreSQL, and web technologies such as HTML and CSS.
