An XML Schema for the Controlling Multiple
Streams for Telepresence (CLUE) Data Model
Roberta Presta
University of Napoli
Via Claudio 21
80125 Napoli
Italy
roberta.presta@unina.it

Simon Pietro Romano
University of Napoli
Via Claudio 21
80125 Napoli
Italy
spromano@unina.it

Area: ART
Working Group: CLUE
Keywords: CLUE, Telepresence, Data Model, Framework
This document provides an XML schema file
for the definition of CLUE data model types. The term "CLUE" stands for
"Controlling Multiple Streams for Telepresence" and is the name of
the IETF working group in which this document, as well as other
companion documents, has been developed. The document defines a coherent
structure for information associated with the description of a
telepresence scenario.
Introduction
This document provides an XML schema file
for the definition of CLUE data model types. For the benefit of the
reader, the term "CLUE" stands for
"Controlling Multiple Streams for Telepresence" and is the name of
the IETF working group in which this document, as well as other
companion documents, has been developed.
A thorough definition of the CLUE
framework can be found in .
The schema is based on information contained in
.
It encodes information and constraints defined in the
aforementioned document in order to provide a formal representation
of the concepts therein presented.
The document specifies the definition of a coherent structure for
information associated with the description of a telepresence
scenario. Such information is used within the CLUE protocol messages,
enabling the dialogue between
a Media Provider and a Media Consumer. CLUE protocol messages, indeed,
are XML messages allowing (i) a Media Provider to advertise its
telepresence capabilities in terms of media captures, capture scenes,
and other features envisioned in the CLUE framework, according to the
format herein defined and (ii) a Media Consumer to request the
desired telepresence options in the form of capture encodings,
represented as described in this document.
Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED",
"MAY", and "OPTIONAL" in this document are to be interpreted as
described in BCP 14
when, and only when, they appear in all capitals, as shown here.
Definitions
This document refers to the same definitions used in
, except for the "CLUE
Participant" definition.
We briefly recall herein some of the main terms used in the document.
- Audio Capture:
- Media Capture for audio. Denoted as "ACn" in the examples in this
document.
- Capture:
- Same as Media Capture.
- Capture Device:
- A device that converts physical input,
such as audio, video, or text, into an electrical signal, in most
cases to be fed into a media encoder.
- Capture Encoding:
- A specific encoding of a Media Capture,
to be sent by a Media Provider to a Media Consumer via RTP.
- Capture Scene:
- A structure representing a spatial region
captured by one or more Capture Devices, each capturing media
representing a portion of the region. The spatial region represented
by a Capture Scene may correspond to a real region in physical space,
such as a room. A Capture Scene includes attributes and one or more
Capture Scene Views, with each view including one or more Media
Captures.
- Capture Scene View (CSV):
- A list of Media Captures of the same
media type that together form one way to represent the entire
Capture Scene.
- CLUE Participant:
-
This term is imported from the CLUE protocol document
.
- Consumer:
- Short for Media Consumer.
- Encoding or Individual Encoding:
-
A set of parameters representing a way to encode a Media Capture to
become a Capture Encoding.
- Encoding Group:
-
A set of encoding parameters representing a total
media encoding capability to be subdivided across potentially
multiple Individual Encodings.
- Endpoint:
- A CLUE-capable device that is the logical point
of final termination through receiving, decoding and rendering,
and/or initiation through capturing, encoding, and sending of media
streams. An endpoint consists of one or more physical devices
that source and sink media streams, and exactly one
participant (which, in turn, includes exactly one SIP User Agent). Endpoints can be anything from multiscreen/multicamera rooms to
handheld devices.
- Media:
- Any data that, after suitable encoding, can be
conveyed over
RTP, including audio, video, or timed text.
- Media Capture:
- A source of Media, such as from one or
more Capture Devices or constructed from other media streams.
- Media Consumer:
-
A CLUE-capable device that intends to receive Capture Encodings.
- Media Provider:
-
A CLUE-capable device that intends to send Capture Encodings.
- Multiple Content Capture (MCC):
-
A Capture that mixes and/or switches other Captures of a single
type (for example, all audio or all video). Particular Media Captures
may or may not be present in the resultant Capture Encoding
depending on time or space. Denoted as "MCCn" in the example cases
in this document.
- Multipoint Control Unit (MCU):
- A CLUE-capable device
that connects
two or more endpoints together into one single multimedia
conference .
An MCU includes a Mixer, similar to those in
, but
without the requirement to send media to
each participant.
- Plane of Interest:
-
The spatial plane within a scene containing the most-relevant subject matter.
- Provider:
- Same as a Media Provider.
- Render:
- The process of generating a representation from Media, such
as displayed motion video or sound emitted from loudspeakers.
- Scene:
- Same as a Capture Scene.
- Simultaneous Transmission Set:
-
A set of Media Captures that can be transmitted simultaneously
from a Media Provider.
- Single Media Capture:
-
A capture that contains media from a single
source capture device, e.g., an audio capture from a single
microphone or a video capture from a single camera.
- Spatial Relation:
-
The arrangement of two objects in space, in contrast to relation in
time or other relationships.
- Stream:
-
A Capture Encoding sent from a Media Provider to a Media Consumer
via RTP .
- Stream Characteristics:
- The media stream attributes commonly used in
non-CLUE SIP/SDP environments (such as media codec, bitrate,
resolution, profile/level, etc.) as well as CLUE-specific
attributes, such as the Capture ID or a spatial location.
- Video Capture:
- A Media Capture for video.
XML Schema
This section contains the XML schema for the CLUE data model definition.
The element and attribute definitions are formal representations of the
concepts
needed to describe the capabilities of a Media Provider and the streams
that are requested by a Media Consumer given the Media Provider's
ADVERTISEMENT .
The main groups of information are:
-
- <mediaCaptures>:
- the list of media captures available
()
- <encodingGroups>:
- the list of encoding groups
()
- <captureScenes>:
- the list of capture scenes
()
- <simultaneousSets>:
- the list of simultaneous transmission
sets ()
- <globalViews>:
- the list of global views sets
()
- <people>:
- metadata about the participants represented
in the telepresence session ()
- <captureEncodings>:
- the list of instantiated capture
encodings ()
All of the above refer to concepts that have been
introduced in
and further detailed in this document.
Acceptable values (enumerations) for this type are managed
by IANA in the "CLUE Schema <personType>" registry,
accessible at https://www.iana.org/assignments/clue.
Acceptable values (enumerations) for this type are managed
by IANA in the "CLUE Schema <view>" registry,
accessible at https://www.iana.org/assignments/clue.
Acceptable values (enumerations) for this type are managed
by IANA in the "CLUE Schema <presentation>" registry,
accessible at https://www.iana.org/assignments/clue.
Acceptable values (enumerations) for this type are managed by
IANA in the "CLUE Schema <sensitivityPattern>" registry,
accessible at https://www.iana.org/assignments/clue.
]]>
The following sections describe the XML schema in more detail. As a general
remark, please note that when an optional element does not define what
its absence means, the associated property is to be considered
undefined.
<mediaCaptures>
<mediaCaptures> represents the list of one or more media
captures available at the Media Provider's side.
Each media capture is represented by a <mediaCapture>
element ().
<encodingGroups>
<encodingGroups> represents the list of
the encoding groups organized on the Media Provider's side.
Each encoding group is represented by an
<encodingGroup> element ().
<captureScenes>
<captureScenes> represents the list of
the capture scenes organized on the Media Provider's side.
Each capture scene is represented by a
<captureScene> element ().
<simultaneousSets>
<simultaneousSets> contains the simultaneous
sets indicated by the Media Provider.
Each simultaneous set is represented by a
<simultaneousSet> element ().
<globalViews>
<globalViews> contains a set of alternative representations of
all the scenes that are offered by a Media Provider to a Media Consumer.
Each alternative is named "global view", and it is represented by a
<globalView> element ().
<captureEncodings>
<captureEncodings> is a list of capture
encodings.
It can represent the list of the desired
capture encodings indicated by the Media Consumer
or the list of instantiated capture encodings on the
provider's side.
Each capture encoding is represented by a
<captureEncoding> element ().
<mediaCapture>
A media capture is the
fundamental representation of a media flow
that is available on the provider's side.
Media captures are characterized by (i) a set of features
that are independent from the specific type of medium
and (ii) a set of features that are media specific.
The features that are common to all media types appear within
the media capture type, which has been designed as an abstract
complex type.
Media-specific captures, such as video captures,
audio captures, and others, are specializations of that abstract
media capture type, as in a typical generalization-specialization
hierarchy.
The following is the XML schema definition of the
media capture type:
]]>
captureID Attribute
The "captureID" attribute is a mandatory field
containing the identifier of the media capture.
Such an identifier serves as the way the capture is referenced from
other data model elements (e.g., simultaneous sets, capture encodings,
and others via <mediaCaptureIDREF>).
mediaType Attribute
The "mediaType" attribute is a mandatory attribute specifying
the media type of the capture.
Common standard values are "audio", "video", and "text", as defined in
.
Other values can be provided. It is assumed that implementations agree
on the interpretation of those other values.
The "mediaType" attribute is as generic as possible. Here is why: (i)
the basic media capture type is an abstract one; (ii) "concrete"
definitions for the standard audio, video,
and text capture types have been specified; (iii) a generic
"otherCaptureType" type has been defined; and (iv) the "mediaType"
attribute has been generically defined as a string, with no particular
template.
From the considerations above, it is clear that if one chooses to rely
on a brand new media type and wants to interoperate with others, an
application-level agreement is needed on how to interpret such
information.
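The following non-normative fragment sketches how a capture relying on a new
media type might be advertised; the "telemetry" media type, the capture
identifier, and the use of the "otherCaptureType" specialization via xsi:type
are invented for illustration:

```xml
<!-- "mediaType" is a plain string: values other than
     "audio", "video", and "text" require an
     application-level agreement on their meaning -->
<mediaCapture xsi:type="otherCaptureType"
              captureID="OC0" mediaType="telemetry">
  <captureSceneIDREF>CS1</captureSceneIDREF>
  <nonSpatiallyDefinable>true</nonSpatiallyDefinable>
</mediaCapture>
```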
<captureSceneIDREF>
<captureSceneIDREF> is a mandatory field
containing the value of the identifier of the capture scene
the media capture is defined in, i.e., the value of the
sceneID attribute () of
that capture scene.
Indeed, each media capture MUST be defined within
one and only one capture scene.
When a media capture is spatially definable, some spatial
information is provided along with it in the form
of point coordinates (see ).
Such coordinates refer to the space of coordinates defined
for the capture scene containing the capture.
<encGroupIDREF>
<encGroupIDREF> is an optional field
containing the identifier of the encoding group
the media capture is associated with, i.e., the value of the
encodingGroupID
attribute () of that encoding group.
Media captures that are not associated with any encoding group cannot
be instantiated as media streams.
<spatialInformation>
Media captures are divided into two categories:
(i) non spatially definable captures and
(ii) spatially definable captures.
Captures are spatially definable when it is possible to provide at least
(i) the coordinates of the capture device's position within the telepresence
room of origin (the capture point), together with its capturing direction
as specified by a second point (the point on line of capture),
or (ii) the area represented within
the telepresence room, obtained by listing the coordinates of the four coplanar
points identifying the plane of interest (the area of capture).
The coordinates of the above-mentioned points MUST be expressed
according to the coordinate space of the capture scene the media
captures belong to.
Non spatially definable captures cannot be characterized
within the physical space of the telepresence room of origin.
Captures of this kind are, for example,
those related to recordings, text captures,
DVDs, registered presentations,
or external streams
that are played in the telepresence room
and transmitted to remote sites.
Spatially definable captures represent a
part of the telepresence room.
The captured part of the telepresence room is described
by means of the <spatialInformation> element.
By comparing the <spatialInformation> element
of different media captures within
the same capture scene,
a consumer can better determine the spatial
relationships between them and render them correctly.
Non spatially definable captures do not embed such elements
in their XML description:
they are instead characterized by having the
<nonSpatiallyDefinable> tag set to "true" (see
).
The definition of the spatial information
type is the following:
]]>
The <captureOrigin> contains the coordinates
of the capture device that is taking the capture (i.e., the capture
point) as well as, optionally, the pointing direction (i.e., the point
on line of capture); see .
The <captureArea> is an optional field
containing four points defining
the captured area covered by the capture
(see ).
The scale of the point coordinates is specified in
the scale attribute () of the capture scene
the media capture belongs to.
Indeed, all the spatially definable media captures referring to
the same capture scene share the same coordinate system and express
their spatial information according to the same scale.
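As a non-normative illustration of the structure just described, the fragment
below could describe a capture placed in a scene using the "mm" scale; all
coordinate values are invented:

```xml
<spatialInformation>
  <!-- Position of the capture device in the room -->
  <captureOrigin>
    <capturePoint>
      <x>1000.0</x><y>500.0</y><z>1200.0</z>
    </capturePoint>
    <!-- Pointing direction: a second point on the line of capture -->
    <lineOfCapturePoint>
      <x>1000.0</x><y>1500.0</y><z>1200.0</z>
    </lineOfCapturePoint>
  </captureOrigin>
  <!-- Four coplanar points (all with y=3000.0 here)
       delimiting the captured area -->
  <captureArea>
    <bottomLeft><x>0.0</x><y>3000.0</y><z>0.0</z></bottomLeft>
    <bottomRight><x>2000.0</x><y>3000.0</y><z>0.0</z></bottomRight>
    <topLeft><x>0.0</x><y>3000.0</y><z>2000.0</z></topLeft>
    <topRight><x>2000.0</x><y>3000.0</y><z>2000.0</z></topRight>
  </captureArea>
</spatialInformation>
```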
<captureOrigin>
The <captureOrigin> element
is used to represent the position and optionally the line of
capture of a capture device.
<captureOrigin> MUST be included in spatially definable audio
captures, while it is optional for spatially definable video captures.
The XML schema definition of the <captureOrigin>
element type is the following:
]]>
The point type contains three spatial coordinates
(x,y,z) representing a point
in the space associated
with a certain capture scene.
The <captureOrigin> element includes a
mandatory <capturePoint> element and an optional
<lineOfCapturePoint> element,
both of the type "pointType".
<capturePoint> specifies
the three coordinates identifying the position of the
capture device.
<lineOfCapturePoint> is another pointType element representing
the "point on line of capture", which gives the pointing direction
of the capture device.
The coordinates of the point on line of capture
MUST NOT be identical to the capture point coordinates.
For a spatially definable video capture, if the point on line of capture
is provided, it MUST belong to the region between
the point of capture and the capture area.
For a spatially definable audio capture,
if the point on line of capture is not provided,
the sensitivity pattern should be considered omnidirectional.
<captureArea>
<captureArea> is an optional element
that can be contained within the spatial information
associated with a media capture.
It represents the spatial area captured by
the media capture.
<captureArea> MUST be included in the spatial information of
spatially definable video captures, while it MUST NOT be associated
with audio captures.
The XML representation of that area is provided
through a set of four point-type elements,
<bottomLeft>, <bottomRight>, <topLeft>,
and <topRight>, that MUST be coplanar.
The four coplanar points are identified from
the perspective of the capture device.
The XML schema definition is the following:
]]>
<nonSpatiallyDefinable>
When media captures are non spatially definable,
they MUST be marked with the boolean
<nonSpatiallyDefinable>
element set to "true", and they MUST NOT include any
<spatialInformation> element.
Indeed, <nonSpatiallyDefinable> and <spatialInformation>
are mutually
exclusive tags, according to the <choice> section within the XML
schema definition of the media capture type.
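As a non-normative sketch of the alternative branch of that <choice>, a text
capture (identifiers invented) carries the flag in place of any spatial
information:

```xml
<!-- A text capture cannot be placed in the room's coordinate
     space, so it carries <nonSpatiallyDefinable> instead of
     <spatialInformation> -->
<mediaCapture xsi:type="textCaptureType"
              captureID="TC0" mediaType="text">
  <captureSceneIDREF>CS1</captureSceneIDREF>
  <nonSpatiallyDefinable>true</nonSpatiallyDefinable>
</mediaCapture>
```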
<content>
A media capture can be (i) an individual media capture or (ii)
an MCC.
An MCC is made by different captures
that can be arranged spatially
(by a composition operation), or temporally (by a switching operation),
or that can result from
the orchestration of both the techniques.
If a media capture is an MCC, then its XML
data model representation MAY include the
<content> element. It is composed of a list of media
capture identifiers ("mediaCaptureIDREF") and capture scene view
identifiers ("sceneViewIDREF"),
where the latter are
used as shortcuts to refer to multiple capture identifiers. The
referenced captures are used to create the MCC according to a certain
strategy. If the <content> element does not appear in an MCC,
or it has no child elements, then the MCC is assumed to be made
of multiple sources, but no information regarding those sources is
provided.
]]>
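A non-normative example of a <content> element follows; the capture and scene
view identifiers are invented:

```xml
<content>
  <!-- Two captures referenced individually... -->
  <mediaCaptureIDREF>VC1</mediaCaptureIDREF>
  <mediaCaptureIDREF>VC2</mediaCaptureIDREF>
  <!-- ...plus a scene view used as a shortcut for all
       the captures it lists -->
  <sceneViewIDREF>SV1</sceneViewIDREF>
</content>
```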
<synchronizationID>
<synchronizationID> is an optional element for multiple
content captures that
contains a numeric identifier.
Multiple content captures marked with the same identifier in the
<synchronizationID>
contain at all times captures coming from the same sources. It is
the Media Provider that determines what the source is for the captures.
In this way, the Media Provider can choose how to group together
single captures for the purpose of keeping them synchronized
according to the <synchronizationID> element.
<allowSubsetChoice>
<allowSubsetChoice> is an optional boolean element for
multiple content captures.
It indicates whether or not the Provider allows the Consumer to
choose a specific subset of the captures referenced by the MCC.
If this attribute is true, and the MCC references other captures,
then the Consumer MAY specify in a CONFIGURE message a specific
subset of those captures to be included in the MCC, and the
Provider MUST then include only that subset. If this attribute is
false, or the MCC does not reference other captures, then the
Consumer MUST NOT select a subset. If <allowSubsetChoice>
is not shown in the XML description of the
MCC, its value is to be considered "false".
<policy>
<policy> is an optional element
that can be used only for multiple content captures.
It indicates the criteria applied to build the multiple content capture
using the media captures referenced in the <mediaCaptureIDREF>
list.
The <policy> value is in the form of a token that indicates the
policy and an index representing an instance of the policy, separated
by a ":" (e.g., SoundLevel:2, RoundRobin:0, etc.).
The XML schema defining the type of the <policy> element
is the following:
]]>
At the time of writing, only two switching policies are defined,
as follows:
- SoundLevel:
- This indicates that the content of the MCC is determined
by a sound-level-detection algorithm. The loudest (active)
speaker (or a previous speaker, depending on the index value) is
contained in the MCC.
- RoundRobin:
- This indicates that the content of the MCC is determined
by a time-based algorithm. For example, the Provider provides
content from a particular source for a period of time and then
provides content from another source, and so on.
Other values for the <policy> element can be used.
In this case, it is assumed that implementations agree on the
meaning of those other values and/or those new switching policies
are defined in later documents.
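Non-normative examples of <policy> values follow, assuming that index 0
denotes the most current instance of each policy (e.g., the current loudest
speaker for SoundLevel):

```xml
<!-- Content follows the loudest (current) speaker -->
<policy>SoundLevel:0</policy>

<!-- Content cycles among the referenced captures over time -->
<policy>RoundRobin:0</policy>
```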
<maxCaptures>
<maxCaptures> is an optional element
that can be used only for MCCs.
It provides information about the number of media captures
that can be represented
in the multiple content capture at a time.
If <maxCaptures> is not provided, all the media captures listed
in the <content> element can appear at a time in the capture
encoding. The type definition is provided below.
]]>
When the "exactNumber" attribute is set to "true", it means
the <maxCaptures> element carries the exact number of the
media captures appearing at a time.
Otherwise, the number of the represented media captures MUST be
considered "<=" the <maxCaptures> value.
For instance, an audio MCC having the <maxCaptures> value set to 1
means that a media stream from the MCC will only contain
audio from a single one of its constituent captures at a time.
On the other hand, if the <maxCaptures> value is set to
4 and the exactNumber
attribute is set to "true", it would mean that the media stream
received from the MCC will always contain a mix of audio
from exactly four of its constituent captures.
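The two situations described above can be sketched as the following
non-normative fragments:

```xml
<!-- Audio from at most one constituent capture at a time -->
<maxCaptures exactNumber="false">1</maxCaptures>

<!-- Always a mix of exactly four constituent captures -->
<maxCaptures exactNumber="true">4</maxCaptures>
```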
<individual>
<individual> is a boolean element
that MUST be used for single-content captures.
Its value is fixed and set to "true".
Such an element indicates that the capture being described is not
an MCC.
Indeed, <individual> and the aforementioned
tags related to MCC attributes
(from Sections to
) are mutually
exclusive, according to the <choice> section within the XML schema
definition of the media capture type.
<description>
<description> is used to provide human-readable
textual information.
This element is included in the XML definition of media captures,
capture scenes, and capture scene views to provide
human-readable descriptions of, respectively,
media captures, capture scenes, and capture scene views.
According to the data model definition of a media capture
(), zero or more
<description> elements can be used, each
providing information in a different
language.
The <description> element definition is the following:
]]>
As can be seen, <description> is a
string element with an attribute ("lang") indicating
the language used in the textual description. Such an attribute is
compliant with the Language-Tag ABNF production
from .
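A non-normative example of multilingual descriptions follows (the text is
invented):

```xml
<description lang="en">main view of the room</description>
<description lang="it">vista principale della sala</description>
```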
<priority>
<priority>
is an optional unsigned integer field
indicating the importance of a media capture
according to the Media Provider's perspective.
It can be used on the receiver's side to
automatically identify
the most relevant contribution from
the Media Provider.
The higher the importance, the lower the contained value.
If no priority
is assigned, no assumptions regarding the relative importance of the
media capture can be made.
<lang>
<lang> is an optional element
containing the language used in the capture.
Zero or more <lang> elements can appear in the XML description of
a media capture. Each such element has to be compliant with the
Language-Tag ABNF production from .
<mobility>
<mobility> is an optional element
indicating whether or not the capture device originating
the capture may move during the telepresence session.
That optional element can assume one of the three following values:
-
- static:
- the spatial information of the capture SHOULD NOT change for the
duration of the CLUE session, across multiple ADVERTISEMENT messages.
- dynamic:
-
the spatial information of the capture MAY change in each new ADVERTISEMENT
message. It can be assumed to remain unchanged until there is a
new ADVERTISEMENT message.
- highly-dynamic:
-
the spatial information of the capture MAY change dynamically, even between
consecutive ADVERTISEMENT messages.
The spatial information provided in an ADVERTISEMENT message is simply a
snapshot of the current values at the time when the message is sent.
<relatedTo>
The optional <relatedTo> element contains the
value of the captureID attribute
of the media capture to which the considered
media capture refers.
The media capture marked with a <relatedTo>
element can be, for example, the translation of the referred
media capture in a different language.
<view>
The <view> element is an optional tag describing what is
represented in the spatial area covered by a media capture.
It has been specified as a simple string with an annotation pointing to
an IANA registry that is defined ad hoc:
Acceptable values (enumerations) for this type are managed
by IANA in the "CLUE Schema <view>" registry,
accessible at https://www.iana.org/assignments/clue.
]]>
The current possible values, as per the CLUE framework document
, are: "room", "table",
"lectern", "individual", and "audience".
<presentation>
The <presentation> element is an optional tag used for media
captures conveying information about presentations within the
telepresence session. It has been specified as a simple string with an
annotation pointing to an IANA registry that is defined ad hoc:
Acceptable values (enumerations) for this type are managed
by IANA in the "CLUE Schema <presentation>" registry,
accessible at https://www.iana.org/assignments/clue.
]]>
The current possible values, as per the CLUE framework document
, are "slides" and "images".
<embeddedText>
The <embeddedText> element is a boolean
element indicating that there is text embedded
in the media capture (e.g., in a video capture).
The language used in such an embedded textual description
is reported in the <embeddedText> "lang" attribute.
The XML schema definition of the <embeddedText>
element is:
]]>
<capturedPeople>
This optional element is used to indicate which
telepresence session participants
are represented within the media capture. For each participant, a
<personIDREF> element is provided.
<personIDREF>
<personIDREF> contains the identifier of the represented person,
i.e., the value of the related
personID attribute.
Metadata about the represented participant can be retrieved by accessing
the <people> list ().
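A non-normative example follows; the person identifiers are invented and are
assumed to match personID attributes within the <people> list:

```xml
<capturedPeople>
  <personIDREF>alice</personIDREF>
  <personIDREF>bob</personIDREF>
</capturedPeople>
```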
Audio Captures
Audio captures inherit all the features of a generic
media capture and present further audio-specific
characteristics.
The XML schema definition of the audio
capture type is reported below:
]]>
An example of audio-specific information
that can be included is represented by the <sensitivityPattern>
element ().
<sensitivityPattern>
The <sensitivityPattern> element is an optional field
describing the characteristics of the nominal sensitivity pattern of the
microphone capturing the audio signal. It has been specified as a simple
string with an annotation pointing to an IANA registry that is defined ad hoc:
Acceptable values (enumerations) for this type are managed by
IANA in the "CLUE Schema <sensitivityPattern>" registry,
accessible at https://www.iana.org/assignments/clue.
]]>
The current possible values, as per the CLUE framework document
, are "uni", "shotgun", "omni",
"figure8", "cardioid", and "hyper-cardioid".
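The following non-normative fragment sketches an audio capture advertising an
omnidirectional microphone; identifiers, coordinates, element ordering, and
the use of xsi:type for the specialization are illustrative assumptions:

```xml
<mediaCapture xsi:type="audioCaptureType"
              captureID="AC0" mediaType="audio">
  <captureSceneIDREF>CS1</captureSceneIDREF>
  <spatialInformation>
    <!-- No <lineOfCapturePoint>: the sensitivity pattern
         would be considered omnidirectional anyway -->
    <captureOrigin>
      <capturePoint>
        <x>1000.0</x><y>500.0</y><z>900.0</z>
      </capturePoint>
    </captureOrigin>
  </spatialInformation>
  <sensitivityPattern>omni</sensitivityPattern>
</mediaCapture>
```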
Video Captures
Video captures, similarly to audio captures,
extend the information of a generic media capture
with video-specific features.
The XML schema representation of the
video capture type is provided in the following:
]]>
Text Captures
Similar to audio captures and video captures, text captures can be described
by extending the generic media capture information.
There are no known properties of text-based media that are not
already covered by the generic mediaCaptureType. Text captures are hence
defined as follows:
]]>
Text captures MUST be marked as non spatially definable (i.e., they
MUST present in their XML description the
<nonSpatiallyDefinable> element set to "true").
Other Capture Types
Other media capture types can be described by using the CLUE data model.
They can be represented by exploiting the "otherCaptureType"
type.
This media capture type is conceived to be
filled in with elements defined within extensions of the current schema,
i.e., with
elements defined in other XML schemas
(see for an example).
The otherCaptureType inherits all the features envisioned for
the abstract mediaCaptureType.
The XML schema representation of the
otherCaptureType is the following:
]]>
When defining new media capture types that are going to be described
by means of the <otherMediaCapture> element,
spatial properties of such new media capture types SHOULD be defined
(e.g., whether or not they are spatially definable and whether or not they
should be associated with an area of capture or other properties
that may be defined).
<captureScene>
A Media Provider organizes the available captures
in capture scenes in order to help the receiver
in both the rendering and the selection of the group
of captures. Capture scenes are made of media captures and
capture scene views, which are sets of
media captures of the same media type.
Each capture scene view is an alternative
to completely represent a capture scene for a fixed
media type.
The XML schema representation of a <captureScene> element
is the following:
]]>
Each capture scene is identified by a "sceneID" attribute.
The <captureScene> element can contain zero or more
textual <description> elements, as defined in
.
Besides <description>, there is the optional
<sceneInformation>
element
(),
which contains structured
information about the scene in the vCard format, and the optional
<sceneViews> element
(), which is the list of
the capture scene
views.
When no <sceneViews> is provided, the capture scene is assumed to
be made of all the media captures that contain the value of its sceneID
attribute in their mandatory <captureSceneIDREF> element.
<sceneInformation>
The <sceneInformation> element contains optional information about
the
capture scene according to the vCard format, as specified in the xCard
specification .
<sceneViews>
The <sceneViews> element is an optional
field of a capture scene containing the list
of scene views.
Each scene view is represented by a <sceneView>
element ().
]]>
sceneID Attribute
The sceneID attribute is a mandatory attribute
containing the identifier of the capture scene.
scale Attribute
The scale attribute is a mandatory attribute
that specifies the scale of the coordinates
provided in the spatial
information of the media capture belonging to
the considered capture scene.
The scale attribute can assume three different values:
-
- "mm":
- the scale is in millimeters.
Systems that
know their physical dimensions
(for example, professionally
installed telepresence room systems)
should always provide such
real-world measurements.
- "unknown":
- the scale is the same for every media capture
in the capture scene, but the unit of measure is undefined.
Systems that are not aware of specific
physical dimensions yet still know
relative distances should select
"unknown" in the scale attribute of the
capture scene to be described.
- "noscale":
- there is no common physical scale
among the media captures of the capture scene.
That means the scale could be different for each
media capture.
]]>
<sceneView>
A <sceneView> element represents a
capture scene view, which contains a set of media
captures of the same media type describing
a capture scene.
A <sceneView> element is characterized as follows.
]]>
One or more optional <description> elements
provide human-readable information about what the scene
view contains. <description> is defined in .
The remaining child elements are described in the
following subsections.
<mediaCaptureIDs>
<mediaCaptureIDs> is the list of the
identifiers of the media captures included in the
scene view.
It is an element of the captureIDListType type, which is
defined as a sequence of <mediaCaptureIDREF>, each
containing the identifier of a media capture
listed within the <mediaCaptures> element:
]]>
sceneViewID Attribute
The sceneViewID attribute is a mandatory attribute
containing the identifier of the capture scene view
represented by the <sceneView> element.
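Putting the pieces together, a non-normative capture scene with two
alternative video scene views might look as follows (all identifiers and
descriptions are invented):

```xml
<captureScene sceneID="CS1" scale="unknown">
  <description lang="en">conference room</description>
  <sceneViews>
    <!-- Alternative 1: a single zoomed-out capture -->
    <sceneView sceneViewID="SV1">
      <mediaCaptureIDs>
        <mediaCaptureIDREF>VC0</mediaCaptureIDREF>
      </mediaCaptureIDs>
    </sceneView>
    <!-- Alternative 2: one capture per room segment -->
    <sceneView sceneViewID="SV2">
      <mediaCaptureIDs>
        <mediaCaptureIDREF>VC1</mediaCaptureIDREF>
        <mediaCaptureIDREF>VC2</mediaCaptureIDREF>
      </mediaCaptureIDs>
    </sceneView>
  </sceneViews>
</captureScene>
```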
<encodingGroup>
The <encodingGroup> element represents
an encoding group, which is made up of a
set of one or more individual
encodings and some parameters that apply
to the group as a whole.
Encoding groups contain references to individual encodings
that can be applied to media captures.
The definition of the <encodingGroup> element
is the following:
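A sketch consistent with the subsections that follow (the numeric type of <maxGroupBandwidth> is an assumption):

```xml
<!-- Sketch of the encodingGroup element: an optional group
     bandwidth cap, the list of grouped individual encodings,
     and a mandatory encodingGroupID attribute. -->
<xs:complexType name="encodingGroupType">
  <xs:sequence>
    <xs:element name="maxGroupBandwidth" type="xs:positiveInteger"
                minOccurs="0"/>
    <xs:element name="encodingIDList" type="encodingIDListType"/>
  </xs:sequence>
  <xs:attribute name="encodingGroupID" type="xs:ID" use="required"/>
</xs:complexType>
```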
In the following subsections, the contained elements are further described.
<maxGroupBandwidth>
<maxGroupBandwidth> is an optional field
containing the maximum bitrate expressed in bits per second that can be
shared by the individual encodings included in the encoding group.
<encodingIDList>
<encodingIDList> is the list
of the individual encodings grouped together in the encoding group.
Each individual encoding is represented
through its identifier contained within
an <encodingID> element.
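The list type can be sketched as follows (the type name is an assumption):

```xml
<!-- encodingIDListType: identifiers of the individual encodings
     belonging to the encoding group. -->
<xs:complexType name="encodingIDListType">
  <xs:sequence>
    <xs:element name="encodingID" type="xs:IDREF"
                maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
```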
encodingGroupID Attribute
The encodingGroupID attribute contains the
identifier of the encoding group.
<simultaneousSet>
<simultaneousSet> represents a simultaneous
transmission set, i.e., a list of captures of the same media type
that can be transmitted at the same time
by a Media Provider.
There are different simultaneous transmission sets
for each media type.
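A sketch consistent with the element and attribute descriptions below (occurrence constraints are assumptions):

```xml
<!-- Sketch of the simultaneousSet element: any mix of capture,
     scene view, and capture scene references, plus a mandatory
     setID and an optional mediaType attribute. -->
<xs:complexType name="simultaneousSetType">
  <xs:sequence>
    <xs:choice minOccurs="0" maxOccurs="unbounded">
      <xs:element name="mediaCaptureIDREF" type="xs:IDREF"/>
      <xs:element name="sceneViewIDREF" type="xs:IDREF"/>
      <xs:element name="captureSceneIDREF" type="xs:IDREF"/>
    </xs:choice>
  </xs:sequence>
  <xs:attribute name="setID" type="xs:ID" use="required"/>
  <xs:attribute name="mediaType" type="xs:string"/>
</xs:complexType>
```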
Besides the identifiers of the captures (<mediaCaptureIDREF>
elements), the identifiers of capture scene views and capture
scenes can also be exploited as shortcuts
(<sceneViewIDREF> and <captureSceneIDREF> elements).
As an example, let's consider the situation where there are two capture
scene views (S1 and S7).
S1 contains captures AC11, AC12, and AC13. S7 contains captures AC71 and AC72.
Provided that AC11, AC12, AC13, AC71, and AC72 can be simultaneously sent to
the Media Consumer, instead of having 5 <mediaCaptureIDREF>
elements listed in the simultaneous set (i.e., one
<mediaCaptureIDREF> for AC11, one for AC12, and so on), there can
be just two <sceneViewIDREF> elements (one for S1 and one for S7).
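In an instance document, the shortcut described above could look like the following (the setID value "SS1" is hypothetical):

```xml
<!-- Equivalent to listing the five <mediaCaptureIDREF> elements
     for AC11, AC12, AC13, AC71, and AC72 individually. -->
<simultaneousSet setID="SS1" mediaType="audio">
  <sceneViewIDREF>S1</sceneViewIDREF>
  <sceneViewIDREF>S7</sceneViewIDREF>
</simultaneousSet>
```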
setID Attribute
The "setID" attribute is a mandatory field containing the identifier of
the simultaneous set.
mediaType Attribute
The "mediaType" attribute is an optional attribute containing the media
type of the
captures referenced by the simultaneous set.
When only capture scene identifiers are listed within a simultaneous
set, the media type attribute MUST appear in the XML description in
order to determine which media captures can be simultaneously sent
together.
<mediaCaptureIDREF>
<mediaCaptureIDREF> contains the identifier of the media
capture that belongs to the simultaneous set.
<sceneViewIDREF>
<sceneViewIDREF> contains the identifier of the scene
view containing a group of captures
that can be sent simultaneously with the other
captures of the simultaneous set.
<captureSceneIDREF>
<captureSceneIDREF> contains the identifier of the capture
scene in which all the included captures of a certain media type
can be sent together with the other captures of the simultaneous
set.
<globalView>
<globalView> is a set of captures of the same media type
representing a summary of the complete Media Provider's offer.
The content of a global view is expressed
by leveraging only scene view identifiers, put within
<sceneViewIDREF> elements.
Each global view is identified by a unique identifier within the
"globalViewID" attribute.
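A sketch of the <globalView> definition implied by the description above:

```xml
<!-- globalView: one or more scene view references and a
     mandatory unique identifier. -->
<xs:complexType name="globalViewType">
  <xs:sequence>
    <xs:element name="sceneViewIDREF" type="xs:IDREF"
                maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="globalViewID" type="xs:ID" use="required"/>
</xs:complexType>
```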
<people>
Information about the participants that are represented in the media
captures is conveyed via the <people> element.
As can be seen from the XML schema depicted below, for each
participant, a <person> element is provided.
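A sketch of the <people> container (the type name is an assumption):

```xml
<!-- people: one <person> entry per represented participant. -->
<xs:complexType name="peopleType">
  <xs:sequence>
    <xs:element name="person" type="personType"
                maxOccurs="unbounded"/>
  </xs:sequence>
</xs:complexType>
```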
<person>
<person> includes all the metadata related to a person
represented within one or more media captures.
This element provides the vCard of the subject (via the
<personInfo> element; see )
and their conference role(s) (via one or more <personType> elements;
see ).
Furthermore, it has a mandatory "personID" attribute
().
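A sketch of the <person> definition consistent with the subsections below (type names and occurrence constraints are assumptions; the xcard prefix is assumed to be bound to urn:ietf:params:xml:ns:vcard-4.0):

```xml
<!-- person: the participant's vCard, their conference roles,
     and a mandatory personID attribute. -->
<xs:complexType name="personType">
  <xs:sequence>
    <xs:element name="personInfo" type="xcard:vcardType"
                minOccurs="0"/>
    <xs:element name="personType" type="personTypeType"
                minOccurs="0" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="personID" type="xs:ID" use="required"/>
</xs:complexType>
```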
personID Attribute
The "personID" attribute carries the identifier of a represented person.
Such an identifier can be used to refer to the participant,
as in the <capturedPeople> element in the media captures
representation ().
<personInfo>
The <personInfo> element is the XML representation of all the
fields composing a vCard as specified in the xCard document
.
The vcardType is imported from the xCard XML schema provided in .
As that schema specifies, the <fn> element within <vcard>
is mandatory.
<personType>
The value of the <personType> element determines the role of
the represented participant within the telepresence session
organization. It has been specified as a simple string with an
annotation pointing to an IANA registry that is defined ad hoc:
Acceptable values (enumerations) for this type are managed
by IANA in the "CLUE Schema <personType>" registry,
accessible at https://www.iana.org/assignments/clue.
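Such a declaration can be sketched as a plain string type whose annotation points readers at the registry (the type name is an assumption):

```xml
<!-- personType values: a plain string; acceptable values are
     maintained in the IANA "CLUE Schema <personType>" registry. -->
<xs:simpleType name="personTypeType">
  <xs:annotation>
    <xs:documentation>
      Acceptable values (enumerations) for this type are managed
      by IANA in the "CLUE Schema &lt;personType&gt;" registry,
      accessible at https://www.iana.org/assignments/clue.
    </xs:documentation>
  </xs:annotation>
  <xs:restriction base="xs:string"/>
</xs:simpleType>
```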
The current possible values, as per the CLUE framework document
, are: "presenter",
"timekeeper", "attendee", "minute taker", "translator", "chairman",
"vice-chairman", and "observer".
A participant can play more than one conference role. In that case, more
than one <personType> element will appear in its description.
<captureEncoding>
A capture encoding is given by
the association of a media capture
with an individual encoding, forming a capture stream as defined in
.
Capture encodings are used within CONFIGURE messages from a Media
Consumer to a Media Provider for representing the streams desired by the
Media Consumer.
For each desired stream, the Media Consumer needs to be allowed to
specify: (i) the capture identifier of the desired capture that has been
advertised by the Media Provider; (ii) the encoding identifier of the
encoding to use, among those advertised by the Media Provider;
and (iii) optionally, in case of multicontent captures, the list of the
capture identifiers of the desired captures.
All the mentioned identifiers are intended to be included in the
ADVERTISEMENT message that the CONFIGURE message refers to.
The XML model of <captureEncoding> is provided in the following.
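A sketch of the <captureEncoding> model consistent with the subsections below (the contentType name for the configured content is an assumption):

```xml
<!-- captureEncoding: the mandatory capture and encoding
     identifiers, plus the optional configured content list
     used when configuring an MCC. -->
<xs:complexType name="captureEncodingType">
  <xs:sequence>
    <xs:element name="captureID" type="xs:IDREF"/>
    <xs:element name="encodingID" type="xs:IDREF"/>
    <xs:element name="configuredContent" type="contentType"
                minOccurs="0"/>
  </xs:sequence>
</xs:complexType>
```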
<captureID>
<captureID> is the mandatory element containing the identifier
of the media capture that has been encoded to
form the capture encoding.
<encodingID>
<encodingID> is the mandatory element containing the identifier
of the applied individual encoding.
<configuredContent>
<configuredContent> is an optional element to be used in case
of the configuration of MCC.
It contains the list of capture identifiers and capture scene view
identifiers the Media Consumer wants within the MCC.
That element is structured as the <content> element used to
describe the content of an MCC.
The total number of media captures listed in the
<configuredContent> MUST be lower than or equal to the value
carried within the <maxCaptures> attribute of the MCC.
<clueInfo>
The <clueInfo> element includes all the
information needed to represent the Media Provider's description of
its telepresence capabilities according to the
CLUE framework.
It is composed of:
- the list of the available media captures
(see "<mediaCaptures>", )
- the list of encoding groups
(see "<encodingGroups>", )
- the list of capture scenes
(see "<captureScenes>", )
- the list of simultaneous transmission sets
(see "<simultaneousSets>", )
- the list of global views
(see "<globalViews>", )
- metadata about the participants represented in the telepresence
session (see "<people>", )
It has been conceived only for data model testing purposes, and though
it resembles the body of an ADVERTISEMENT message, it is not actually
used in the CLUE protocol message definitions.
The telepresence capabilities descriptions compliant with
this data model specification that can be found in Sections
and
are provided by using the <clueInfo> element.
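The container can be sketched, consistent with the list above (type names and occurrence constraints are assumptions):

```xml
<!-- clueInfo: the complete Media Provider description, conceived
     for data model testing purposes only. -->
<xs:element name="clueInfo" type="clueInfoType"/>
<xs:complexType name="clueInfoType">
  <xs:sequence>
    <xs:element name="mediaCaptures" type="mediaCapturesType"/>
    <xs:element name="encodingGroups" type="encodingGroupsType"/>
    <xs:element name="captureScenes" type="captureScenesType"/>
    <xs:element name="simultaneousSets"
                type="simultaneousSetsType" minOccurs="0"/>
    <xs:element name="globalViews" type="globalViewsType"
                minOccurs="0"/>
    <xs:element name="people" type="peopleType" minOccurs="0"/>
  </xs:sequence>
  <xs:attribute name="clueInfoID" type="xs:ID" use="required"/>
</xs:complexType>
```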
XML Schema Extensibility
The telepresence data model defined in this document is
meant to be extensible. Extensions are accomplished by defining
elements or attributes qualified by namespaces other than
"urn:ietf:params:xml:ns:clue-info" and
"urn:ietf:params:xml:ns:vcard-4.0" for use wherever the
schema allows such extensions (i.e., where the XML schema definition
specifies "anyAttribute" or "anyElement").
Elements or attributes from unknown namespaces MUST be ignored.
Extensibility was purposely favored as much as possible, based on
expectations about custom implementations. Hence, the schema offers
implementors enough flexibility to define custom extensions, without
losing compliance with the standard. This is achieved by leveraging
<xs:any> elements and <xs:anyAttribute> attributes, which
is a common approach with schemas, while still matching the
Unique Particle Attribution (UPA) constraint.
Example of Extension
When extending the CLUE data model, a new schema
with a new namespace associated with it
needs to be specified.
In the following, an example of an extension is provided. The
extension defines a new audio capture attribute ("newAudioFeature")
and an attribute for characterizing the captures belonging to an
"otherCaptureType" defined by the user.
An XML document compliant with the extension is also included;
that document validates against the current XML schema for the CLUE
data model.
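A minimal sketch of such an extension schema, under stated assumptions: the target namespace and the "otherFeature" attribute name are hypothetical; only "newAudioFeature" and "otherCaptureType" come from the text above.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical extension namespace; a real extension would
     define and register its own namespace. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="urn:example:clue-extension"
           xmlns="urn:example:clue-extension"
           elementFormDefault="qualified"
           attributeFormDefault="qualified">
  <!-- New attribute applicable to audio captures. -->
  <xs:attribute name="newAudioFeature" type="xs:string"/>
  <!-- Attribute characterizing captures belonging to the
       user-defined "otherCaptureType" (name illustrative). -->
  <xs:attribute name="otherFeature" type="xs:string"/>
</xs:schema>
```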
[Extension-compliant XML instance document omitted.]
Security Considerations
This document defines, through an XML schema, a data model for telepresence
scenarios.
The modeled information is identified in the CLUE framework
as necessary in order to enable a full-fledged
media stream negotiation and rendering.
Indeed, the XML elements herein defined are used within
CLUE protocol messages to describe
both the media streams representing the Media Provider's telepresence
offer and the desired selection requested by the Media Consumer.
Security concerns described in
apply to this
document.
Data model information carried within CLUE messages SHOULD
be accessed only by authenticated endpoints.
Indeed, authenticated access is strongly advisable, especially when
conveying information about individuals (<personalInfo>) and/or
scenes (<sceneInformation>).
There might be exceptions, depending on the level of criticality
that is associated with the setup and configuration of a specific
session. In principle, one might even decide that no protection at all
is needed for a particular session; this is why authentication
has not been identified as a mandatory requirement.
Going deeper into details, some information published by the Media
Provider might reveal sensitive data about who and what is represented
in the transmitted streams.
The vCard included in the <personInfo>
elements ()
mandatorily contains the identity of the represented person.
Optionally, vCards can also carry the person's contact addresses,
together with their photo and other personal data. Similar
privacy-critical information can be conveyed by means of
<sceneInformation> elements ()
describing the capture scenes.
The <description> elements () can also
specify details about the content of media captures, capture scenes,
and scene views that should be protected.
Integrity attacks on the data model information encapsulated
in CLUE messages can invalidate the success of the telepresence
session's setup by misleading the Media Consumer's and Media Provider's
interpretation of the offered and desired media streams.
The assurance of the authenticated access and of the integrity of the
data model information is up to the involved transport mechanisms,
namely the CLUE protocol
and the CLUE data channel .
XML parsers need to be robust with respect to malformed documents.
Reading malformed documents from unknown or untrusted sources could
result in an attacker gaining the privileges of the user running the
XML parser. In an extreme situation, the entire machine could be
compromised.
IANA Considerations
This document registers a new XML namespace, a new XML schema, the media
type for the schema, and four new registries associated, respectively,
with acceptable <view>, <presentation>,
<sensitivityPattern>, and <personType> values.
XML Namespace Registration
- URI:
- urn:ietf:params:xml:ns:clue-info
- Registrant Contact:
- IETF CLUE Working Group <clue@ietf.org>,
Roberta Presta <roberta.presta@unina.it>
- XML:
CLUE Data Model Namespace
Namespace for CLUE Data Model
urn:ietf:params:xml:ns:clue-info
See
RFC 8846.
XML Schema Registration
This section registers an XML schema per the guidelines in
.
- URI:
- urn:ietf:params:xml:schema:clue-info
- Registrant Contact:
- CLUE Working Group (clue@ietf.org),
Roberta Presta (roberta.presta@unina.it).
- Schema:
- The XML for this schema can be found in its entirety in
of this document.
Media Type Registration for "application/clue_info+xml"
This section registers the "application/clue_info+xml" media type.
- To:
- ietf-types@iana.org
- Subject:
- Registration of media type application/clue_info+xml
- Type name:
- application
- Subtype name:
- clue_info+xml
- Required parameters:
- (none)
- Optional parameters:
- charset
Same as the charset parameter of "application/xml" as specified in
.
- Encoding considerations:
- Same as the encoding considerations of
"application/xml" as specified in .
- Security considerations:
- This content type is designed to carry
data related to telepresence information. Some of the data
could be considered private. This media type does not provide any
protection and thus other mechanisms such as those described in
are required to protect the data. This media type does
not contain executable content.
- Interoperability considerations:
- None.
- Published specification:
- RFC 8846
- Applications that use this media type:
- CLUE-capable telepresence
systems.
- Additional Information:
-
- Magic Number(s):
- none
- File extension(s):
- .clue
- Macintosh File Type Code(s):
- TEXT
- Person & email address to contact for further
information:
- Roberta Presta (roberta.presta@unina.it).
- Intended usage:
- LIMITED USE
- Author/Change controller:
- The IETF
- Other information:
- This media type is a specialization of
"application/xml" , and many of the considerations
described there also apply to "application/clue_info+xml".
Registry for Acceptable <view> Values
IANA has created a registry of acceptable values for the
<view> tag as defined in .
The initial values for this registry are "room", "table", "lectern",
"individual", and "audience".
New values are assigned by Expert Review per .
This reviewer will ensure that the requested registry entry conforms to
the prescribed formatting.
Registry for Acceptable <presentation> Values
IANA has created a registry of acceptable values for the
<presentation> tag as defined in
.
The initial values for this registry are "slides" and "images".
New values are assigned by Expert Review per .
This reviewer will ensure that the requested registry entry conforms to
the prescribed formatting.
Registry for Acceptable <sensitivityPattern> Values
IANA has created a registry of acceptable values for the
<sensitivityPattern> tag as defined in
.
The initial values for this registry are "uni", "shotgun", "omni",
"figure8", "cardioid", and "hyper-cardioid".
New values are assigned by Expert Review per .
This reviewer will ensure that the requested registry entry conforms to
the prescribed formatting.
Registry for Acceptable <personType> Values
IANA has created a registry of acceptable values for the
<personType> tag as defined in
.
The initial values for this registry are "presenter",
"timekeeper", "attendee", "minute taker", "translator", "chairman",
"vice-chairman", and "observer".
New values are assigned by Expert Review per .
This reviewer will ensure that the requested registry entry conforms to
the prescribed formatting.
Sample XML File
The following XML document represents a schema-compliant example
of a CLUE telepresence scenario.
Taking inspiration from the examples described in
the framework specification ,
the XML representation of an endpoint-style Media
Provider's ADVERTISEMENT is provided.
There are three cameras, where the central one is also capable of
capturing a zoomed-out
view of the overall telepresence room.
Besides the three video captures coming from the cameras,
the Media Provider makes available a further multicontent capture of
the loudest
segment of the room,
obtained by switching the video source across the three cameras.
For the sake of simplicity, only one audio capture is advertised for
the audio of the whole room.
The three cameras are placed in front of three participants
(Alice, Bob, and Ciccio),
whose vCard and conference role details are also provided.
Media captures are arranged into four capture scene views:
- (VC0, VC1, VC2) - left, center, and right camera video captures
- (VC3) - video capture associated with loudest room segment
- (VC4) - video capture zoomed-out view of all people in the room
- (AC0) - main audio
There are two encoding groups: (i) EG0, for video encodings, and (ii)
EG1, for audio encodings.
As to the simultaneous sets, VC1 and VC4 cannot be transmitted
simultaneously
since they are captured by the same device, i.e., the central camera
(VC4 is a zoomed-out view while
VC1 is a focused view of the front participant).
On the other hand, VC3 and VC4 cannot be simultaneous either,
since VC3, the loudest segment of the room,
might be at a certain point in time focusing on the central
part of the room, i.e., the same as VC1.
The simultaneous sets would then be the following:
- SS1: made up of VC3 and all the captures in the first capture
scene view (VC0, VC1, and VC2)
- SS2: made up of VC0, VC2, and VC4
[Schema-compliant sample XML document omitted.]
MCC Example
Enhancing the scenario presented in the previous example, the Media
Provider is able to advertise a composed capture, VC7, made up of a
big picture representing the current speaker (VC3) and two
picture-in-picture boxes representing the previous speakers (the
previous one, VC5, and the oldest one, VC6).
The provider does not want to instantiate and send VC5 and VC6, so it
does not associate any encoding group with them. Their XML
representations are provided to enable the description of VC7.
A possible description for that scenario could be the following:
[Schema-compliant XML document for the MCC scenario omitted.]
References
Normative References
Framework for Telepresence Multi-Streams
Controlling Multiple Streams for Telepresence (CLUE) Protocol Data Channel
Protocol for Controlling Multiple Streams for Telepresence (CLUE)
Informative References
Acknowledgements
The authors thank all the CLUE contributors for their valuable feedback and
support. Thanks also to , whose AD review
helped us improve the quality of the document.