{ "openapi": "3.1.0", "info": { "contact": { "email": "support@flussonic.com", "name": "Support team", "url": "https://flussonic.com/" }, "description": "

Intro

\n\nThis API describes how Flussonic streaming server fetches runtime stream configuration\nfrom a backend written by the user. So this is not an API to Flussonic; it is an API\nthat Flussonic uses to connect to another system.\n\nPermanent [download link](https://flussonic.com/doc/api/config-external.json) to the JSON schema file.\n\n

What can you do with this API?

\n\nYou can take full control over streams in Flussonic without modifying the config file.\nThis is the preferable way to integrate the streaming server into complex systems.\n\nYou can launch static streams that work without client requests, and you can\ndynamically control each request to a stream that is not known to the server.\n\nThis API was designed with the idea in mind that you may have hundreds or thousands of\nstreaming servers and millions of streams in your database. None of the streaming servers\nneeds to know the full list of streams, but each of them can get access to any stream if\nrequired.\n\n

How does it work?

\n\n1. You implement your own configuration backend that responds with a list of streams that\nmust be launched on Flussonic Media Server, together with their configuration.\n2. Every few seconds, Flussonic makes a request to get all streams that must be running\non this server in static mode, i.e. without any client requests.\n3. After this, Flussonic takes all streams that were not listed in step 2 and sends\nseveral more requests with a `name` query string (comma-joined stream names) to\ncheck those streams. If you do not respond with their configuration, they will be closed.\n4. When a user requests to play or publish an unknown stream, Flussonic makes\nanother request with `name` in the query string.\n\n\n
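The polling flow above can be sketched as a minimal backend, assuming a hypothetical in-memory stream table; the `STREAMS` map, the `static` flag, and the `inputs` field are illustrative assumptions, not the exact payload schema of this API:

```python
# Sketch of a configuration backend for the polling flow described above.
# STREAMS, the `static` flag, and `inputs` are illustrative assumptions.

STREAMS = {
    "cam1": {"name": "cam1", "inputs": [{"url": "rtsp://10.0.0.1/main"}], "static": True},
    "cam2": {"name": "cam2", "inputs": [{"url": "rtsp://10.0.0.2/main"}], "static": False},
}

def list_static_streams():
    """Step 2: everything that must run on this server without client requests."""
    return {"streams": [s for s in STREAMS.values() if s["static"]]}

def check_streams(name_param):
    """Steps 3-4: `name_param` is the comma-joined `name` query string.
    Streams we do not return are closed by Flussonic."""
    names = name_param.split(",")
    return {"streams": [STREAMS[n] for n in names if n in STREAMS]}
```

In production these functions would sit behind HTTP endpoints and would have to answer well within the latency budget discussed in the performance requirements below.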

Scenarios examples

\n\n

Transcoder load balancing

\n\nYou can monitor server load and store it in a centralized database. This information can\nbe used to distribute streams across streaming servers.\n\n

Publish transcoding load balancing

\n\nYou have several servers that can be used to accept incoming publishing.\n\nUse transcoder usage statistics to push incoming published streams to the least loaded transcoder.\n\n
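The least-loaded selection can be sketched as follows; `load_by_server` is a hypothetical map of transcoder load fractions collected from your statistics database:

```python
def pick_transcoder(load_by_server):
    """Pick the server with the lowest reported transcoder load.
    `load_by_server` is a hypothetical {hostname: load_fraction} map."""
    if not load_by_server:
        raise ValueError("no transcoders registered")
    return min(load_by_server, key=load_by_server.get)
```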

Failover configuration

\n\nUse a fault-tolerant system like etcd for keeping stream information, and launch a\ncopy of your backend near each Flussonic instance.\n\nIt is impossible to configure more than one configuration backend in Flussonic, and\nthis will not change. Use a local copy of the database for fault tolerance.\n\n

Performance requirements

\n\nYour configuration backend must be fast; consider targeting 100 ms for the whole response.\nUse an in-memory database, optimized JSON generation, etc.\n\nIf you have 10 Flussonic instances and have configured a single configuration backend,\nyou will get more than 3 requests per second. If you have allocated a single core for\nthese responses and a response takes more than 300 ms (a real-life example), your Flussonic servers will start\nlagging behind on configuration updates.\n", "title": "Flussonic Configuration Backend API", "version": "22.12.1" }, "components": { "schemas": { "ts_pid": { "maximum": 8191, "minimum": 0, "type": "integer" }, "collection_response": { "type": "object", "properties": { "estimated_count": { "description": "Estimated total number of records for the query (regardless of the cursors).\n", "type": "integer", "example": 5 }, "next": { "description": "Next cursor: a properly encoded equivalent of offset allowing to read the next bunch of items.\nLearn more in [Flussonic API design principles](https://flussonic.com/doc/rest-api-guidelines/#api-http-collections-cursor).\n", "example": "JTI0cG9zaXRpb25fZ3Q9MA==", "type": "string" }, "prev": { "description": "Previous cursor: a properly encoded equivalent of offset allowing to read the previous bunch of items.\nLearn more in [Flussonic API design principles](https://flussonic.com/doc/rest-api-guidelines/#api-http-collections-cursor).\n", "example": "JTI0cG9zaXRpb25fbHQ9MSYlMjRyZXZlcnNlZD10cnVl", "type": "string" }, "timing": { "description": "An object with a list of different timings measured during this API call.", "type": "object" } } }, "error_response": { "type": "object", "properties": { "errors": { "description": "List of structured errors", "type": "array", "items": { "$ref": "#/components/schemas/error" } }, "error": { "type": "string", "deprecated": true, "x-delete-at": 23.09, "x-private": true, "description": "This is how Flussonic serves errors right now. 
To be removed as we migrate to new format\n" } } }, "error": { "type": "object", "properties": { "id": { "type": "string", "description": "a unique identifier for this particular occurrence of the problem\n" }, "status": { "type": "string", "description": "the HTTP status code applicable to this problem, expressed as a string value\n" }, "code": { "type": "string", "description": "an application-specific error code, expressed as a string value\n" }, "title": { "type": "string", "description": "a short, human-readable summary of the problem that SHOULD NOT change from\noccurrence to occurrence of the problem, except for purposes of localization\n" }, "source": { "description": "an object containing references to the source of the error\n", "type": "object", "properties": { "pointer": { "type": "string", "description": "a JSON Pointer [RFC6901] to the associated entity in the request document\n[e.g. `\"/data\"` for a primary data object, or `\"/data/attributes/title\"` for a specific attribute].\n" }, "parameter": { "type": "string", "description": "a string indicating which URI query parameter caused the error." 
} } }, "meta": { "type": "object", "description": "a meta object containing non-standard meta-information about the error.\n", "additionalProperties": { "type": "string" } } } }, "thumbnails_spec": { "type": "object", "properties": { "url": { "description": "*Flussonic* takes a thumbnail from the specified URL on each keyframe.\nMay reduce CPU usage on IP cameras.\n", "type": "string", "example": "http://10.115.23.45/isapi/thumbnail.jpg" }, "enabled": { "description": "Whether to generate thumbnails from the video stream.", "oneOf": [ { "$ref": "#/components/schemas/thumbnails_enabled_spec" } ], "default": true }, "sizes": { "description": "What sizes will be used for thumbnails generation.", "type": "array", "items": { "$ref": "#/components/schemas/thumbnails_size_spec" }, "default": [] } } }, "thumbnails_enabled_spec": { "oneOf": [ { "type": "boolean", "enum": [ true, false ], "description": "Configures thumbnails behaviour.\n- true : \n\n In case of Stream: \n Thumbnails are created and stored in DVR during the recording. If thumbnail is requested then it will be returned from DVR.\n\n In case of VOD:\n Thumbnails are created and stored in VOD location at thumbnails folder during the VOD opening if they are not created yet.\n If VOD location is `readonly` then it will be used only for thumbnails searching. Nothing will be created and stored. \n\n If thumbnail is requested then it will be returned from thumbnails folder in VOD location.\n\n- false : thumbnails are not stored at all. If thumbnail is requested then 403 'Forbidden' error will be returned.\n" }, { "type": "string", "enum": [ "ondemand" ], "description": "- ondemand : thumbnails are not stored at all. If thumbnail is requested then it will be generated from corresponding video frame.\n" } ] }, "thumbnails_size_spec": { "type": "object", "properties": { "width": { "type": "integer", "description": "The thumbnail width." }, "height": { "type": "integer", "description": "The thumbnail height." 
} } }, "session_key": { "anyOf": [ { "oneOf": [ { "title": "IP", "const": "ip", "description": "IP address" }, { "title": "Name", "const": "name", "description": "Stream name" }, { "title": "Proto", "const": "proto", "description": "Protocol" }, { "title": "Token", "const": "token", "description": "Token" } ], "type": "string" }, { "allOf": [ { "$ref": "#/components/schemas/session_key_query" } ] } ] }, "session_status": { "oneOf": [ { "title": "Establishing", "const": "establishing" }, { "title": "Running", "const": "running" }, { "title": "Stalling", "const": "stalling" }, { "title": "Finished", "const": "finished" } ], "type": "string" }, "protocol": { "type": "string", "oneOf": [ { "title": "copy", "const": "copy" }, { "title": "fake", "const": "fake" }, { "title": "RTP", "const": "rtp" }, { "title": "RTMP", "const": "rtmp" }, { "title": "RTMPS", "const": "rtmps" }, { "title": "RTMPE", "const": "rtmpe" }, { "title": "RTSP", "const": "rtsp" }, { "title": "RTSP-UDP", "const": "rtsp-udp" }, { "title": "RTSP2", "const": "rtsp2" }, { "title": "RTSPS", "const": "rtsps" }, { "title": "EST", "const": "est" }, { "title": "SRT", "const": "srt" }, { "title": "MSE-LD", "const": "mseld" }, { "title": "SHOUTcast", "const": "shoutcast" }, { "title": "SHOUTcasts", "const": "shoutcasts" }, { "title": "WebRTC", "const": "webrtc" }, { "title": "HLS", "const": "hls" }, { "title": "HLSS", "const": "hlss" }, { "title": "HLS2", "const": "hls2" }, { "title": "HLSS2", "const": "hlss2" }, { "title": "LL-HLS", "const": "llhls" }, { "title": "CMAF", "const": "cmaf" }, { "title": "DASH", "const": "dash" }, { "title": "MSS", "const": "mss" }, { "title": "M4S", "const": "m4s" }, { "title": "M4SS", "const": "m4ss" }, { "title": "M4F", "const": "m4f" }, { "title": "M4FS", "const": "m4fs" }, { "title": "HTTP MPEG-TS", "const": "tshttp" }, { "title": "HTTPS MPEG-TS", "const": "tshttps" }, { "title": "TCP MPEG-TS", "const": "tstcp" }, { "title": "SSL MPEG-TS", "const": "tsssl" }, { "title": 
"FLV", "const": "flv" }, { "title": "annexB", "const": "annexb" }, { "title": "UDP", "const": "udp" }, { "title": "UDP1", "const": "udp1" }, { "title": "UDP2", "const": "udp2" }, { "title": "UDP3", "const": "udp3" }, { "title": "UDP MPTS", "const": "mpts-udp" }, { "title": "HTTP MPTS", "const": "mpts-http" }, { "title": "HTTPS MPTS", "const": "mpts-https" }, { "title": "DVB MPTS", "const": "mpts-dvb" }, { "title": "DVB", "const": "dvb" }, { "title": "Decklink", "const": "decklink" }, { "title": "DekTec", "const": "dektec" }, { "title": "DekTec ASI", "const": "dektec-asi" }, { "title": "v4l", "const": "v4l" }, { "title": "v4l2", "const": "v4l2" }, { "title": "Playlist", "const": "playlist" }, { "title": "Mixer", "const": "mixer" }, { "title": "Mosaic", "const": "mosaic" }, { "title": "Mosaic2", "const": "mosaic2" }, { "title": "Timeshift", "const": "timeshift" }, { "title": "File", "const": "file" }, { "title": "Download", "const": "download" }, { "title": "MBR", "const": "mbr" }, { "title": "MP4", "const": "mp4" }, { "title": "Logo", "const": "logo" }, { "title": "JPEG", "const": "jpeg" }, { "title": "MJPEG", "const": "mjpeg" }, { "title": "H.323", "const": "h323" }, { "title": "Ad injector", "const": "ad_injector" }, { "title": "ffmpeg", "const": "ffmpeg" }, { "title": "Transponder", "const": "transponder" }, { "title": "API", "const": "api" }, { "title": "JSON manifest", "const": "json_manifest" }, { "title": "Player", "const": "player" }, { "title": "NDI", "const": "ndi" }, { "title": "FRIP", "const": "frip" }, { "title": "ST2110", "const": "st2110" } ] }, "playback_headers": { "type": "object", "properties": { "playback": { "type": "string", "description": "Playback type for which the HTTP headers apply.", "oneOf": [ { "title": "Live", "const": "live" }, { "title": "DVR", "const": "dvr" } ], "example": "live" }, "protocols": { "description": "Configuration to allow/forbid headers for various playback protocols.", "allOf": [ { "$ref": 
"#/components/schemas/play_protocols_spec" } ] }, "headers": { "additionalProperties": { "type": "string", "minLength": 1, "maxLength": 64 }, "type": "object", "maxItems": 10, "x-key-type": "string", "description": "HTTP headers in name-value format for manifest requests.", "example": { "Cache-Control": "max-age=3600" } }, "segment_headers": { "additionalProperties": { "type": "string", "minLength": 1, "maxLength": 64 }, "type": "object", "maxItems": 10, "x-key-type": "string", "description": "HTTP headers in name-value format for segment requests.", "example": { "Cache-Control": "max-age=3600" } } } }, "frame_video_codec": { "type": "string", "oneOf": [ { "title": "H.264", "const": "h264" }, { "title": "HEVC (H.265)", "const": "hevc" }, { "title": "MP2V", "const": "mp2v" }, { "title": "VP9", "const": "vp9", "deprecated": true, "x-delete-at": 23.09 }, { "title": "MJPEG", "const": "mjpeg" }, { "title": "Screen", "const": "screen" }, { "title": "JPEG", "const": "jpeg" }, { "title": "AV1", "const": "av1" }, { "title": "JPEG 2000", "const": "j2k" } ] }, "frame_audio_codec": { "type": "string", "oneOf": [ { "title": "AAC", "const": "aac", "x-api-allow": [ "watcher-client", "watcher-admin" ] }, { "title": "MP3", "const": "mp3" }, { "title": "MP2A", "const": "mp2a" }, { "title": "Opus", "const": "opus" }, { "title": "AC3", "const": "ac3" }, { "title": "EAC3", "const": "eac3" }, { "title": "PCMA", "const": "pcma" }, { "title": "PCMU", "const": "pcmu" } ] }, "frame_text_codec": { "type": "string", "oneOf": [ { "title": "TTXT", "const": "ttxt" }, { "title": "Text", "const": "text" }, { "title": "WVTT", "const": "wvtt" }, { "title": "TTML", "const": "ttml" }, { "title": "Subtitle", "const": "subtitle" }, { "title": "ID3T", "const": "id3t" }, { "title": "ONVIF", "const": "onvif" }, { "title": "TX3G", "const": "tx3g" } ] }, "frame_raw_codec": { "type": "string", "oneOf": [ { "title": "YUV", "const": "yuv" }, { "title": "UYVY422", "const": "uyvy422" }, { "title": "YUYV422", 
"const": "yuyv422" }, { "title": "YUV422p10", "const": "yuv422p10" }, { "title": "ARGB", "const": "argb" }, { "title": "RGB48", "const": "rgb48" }, { "title": "V210", "const": "v210" }, { "title": "PCM", "const": "pcm" } ] }, "frame_audio_raw_codec": { "type": "string", "oneOf": [ { "const": "pcm" } ] }, "frame_app_codec": { "oneOf": [ { "title": "MPEG-TS", "const": "mpegts" }, { "title": "Object", "const": "object" }, { "title": "EIT", "const": "eit" }, { "title": "SCTE-27", "const": "scte27" }, { "title": "SCTE-35", "const": "scte35" }, { "title": "KLV", "const": "klv" }, { "title": "Empty", "const": "empty" } ], "type": "string" }, "frame_codec": { "anyOf": [ { "$ref": "#/components/schemas/frame_video_codec" }, { "$ref": "#/components/schemas/frame_audio_codec" }, { "$ref": "#/components/schemas/frame_raw_codec" }, { "$ref": "#/components/schemas/frame_text_codec" }, { "$ref": "#/components/schemas/frame_app_codec" }, { "type": "string", "readOnly": true, "x-private": true, "description": "We will show the received codec, but it cannot be configured." 
} ] }, "frame_content": { "type": "string", "oneOf": [ { "title": "Audio", "const": "audio" }, { "title": "Video", "const": "video" }, { "title": "Text", "const": "text" }, { "title": "Metadata", "const": "metadata" }, { "title": "Application", "const": "application" } ] }, "frame_video_pix_fmt": { "type": "string", "oneOf": [ { "title": "YUV420P", "const": "yuv420p" }, { "title": "YUVJ420P", "const": "yuvj420p" }, { "title": "YUV422P", "const": "yuv422p" }, { "title": "YUV444P", "const": "yuv444p" }, { "title": "YUV420P10", "const": "yuv420p10" }, { "title": "YUV422P10", "const": "yuv422p10" }, { "title": "YUV444P10", "const": "yuv444p10" }, { "title": "YUV420P12", "const": "yuv420p12" }, { "title": "YUV422P12", "const": "yuv422p12" }, { "title": "YUV444P12", "const": "yuv444p12" }, { "title": "Gray8", "const": "gray8" }, { "title": "Gray10", "const": "gray10" }, { "title": "Gray12", "const": "gray12" }, { "title": "NV12", "const": "nv12" }, { "title": "P016", "const": "p016" }, { "title": "V210", "const": "v210" }, { "title": "UYVY422", "const": "uyvy422" }, { "title": "YUYV422", "const": "yuyv422" }, { "title": "RGB48", "const": "rgb48" }, { "title": "ARGB", "const": "argb" } ] }, "track_info": { "oneOf": [ { "$ref": "#/components/schemas/track_info_video" }, { "$ref": "#/components/schemas/track_info_audio" }, { "$ref": "#/components/schemas/track_info_text" }, { "$ref": "#/components/schemas/track_info_metadata" }, { "$ref": "#/components/schemas/track_info_application" } ], "discriminator": { "propertyName": "content", "mapping": { "video": "#/components/schemas/track_info_video", "audio": "#/components/schemas/track_info_audio", "text": "#/components/schemas/track_info_text", "metadata": "#/components/schemas/track_info_metadata", "application": "#/components/schemas/track_info_application" } }, "x-record-definition": "#/components/schemas/track_info_full" }, "track_info_base": { "type": "object", "properties": { "track_id": { "description": "Track 
identifier assigned by Flussonic.", "anyOf": [ { "type": "integer" }, { "type": "string" } ], "example": "v1" }, "frame_duration": { "description": "For a video track, it is the time between the beginning of a frame and the beginning of the next frame.\n\nThis parameter is important for some protocols. Normally, frame duration is the difference between timestamps of two neighbouring frames.\nHowever, sometimes (when the connection is broken) video breakups are possible.\nAs a result, the delta between two consecutive frame timestamps will not be equal to the frame duration.\nThis situation is considered a frame gap and is handled differently across different protocols.\n", "type": "number", "format": "ticks", "x-format-description": "ticks" }, "avg_fps": { "description": "Actual average FPS - the number of frames displayed per second (calculated for the last 200 frames).\nThe higher the FPS, the smoother the video playback. \nUsually, standard values of FPS for films and video are used in different countries (for example, in Russia and Europe it is 25 FPS).\n", "type": "number", "x-notice": "calculated fps for statistic" }, "bandwidth": { "description": "Bandwidth necessary to transfer this track.\nThis is slightly greater than the bitrate because the transport (e.g. 
MPEG TS) adds some overhead\n", "type": "integer", "format": "speed", "example": 2600, "x-format-description": "speed" } } }, "track_info_base_configurable": { "type": "object", "required": [ "content" ], "properties": { "content": { "description": "Content of the track (audio, video, or text).", "allOf": [ { "$ref": "#/components/schemas/frame_content" } ], "x-api-allow": [ "smartcam", "iris-hal" ] }, "title": { "description": "Human-readable localized title of the track.", "type": "string", "x-notice": "Human-readable localized title for HDS/HLS", "example": "Video1" }, "bitrate": { "description": "Bitrate of the track in kbit/s.\nWhen using sdtv/hdtv/uhdtv transcoder target, for video tracks\nthis field sets the desired transport bandwidth instead of raw video bitrate.\n", "type": "integer", "format": "speed", "example": 2543, "x-api-allow": [ "smartcam", "iris-hal" ], "x-format-description": "speed" }, "pid": { "description": "This parameter sets PIDs values for outgoing MPEG-TS streams.\nPID identifies separate data stream inside the multiplexed MPEG-TS stream.\nIt is possible to set PID values for PMT, SDT, video, and audio tracks.\nTracks are numbered starting from one. The code a1=123 sets a PID value for the first audio track.\n\nIt is possible to set the base index for the tracks of a certain type using the 0 (zero) index.\nFor example, t0=100 sets PID=101 for the first track, 102 for the second, and so on.\nNumbers can be given in decimal form (by default) or hexadecimal with 16# prefix.\n", "allOf": [ { "$ref": "#/components/schemas/ts_pid" } ] } } }, "track_info_video": { "allOf": [ { "$ref": "#/components/schemas/track_info_base" }, { "$ref": "#/components/schemas/track_info_base_configurable" }, { "type": "object", "properties": { "codec": { "description": "Codec of the track. 
Different codecs do **not** get the same track.", "allOf": [ { "$ref": "#/components/schemas/frame_codec" } ], "example": "h264", "x-api-allow": [ "smartcam", "iris-hal" ] } } }, { "$ref": "#/components/schemas/track_info_video_specific" }, { "$ref": "#/components/schemas/track_info_video_configurable" } ] }, "track_info_video_specific": { "type": "object", "properties": { "last_gop": { "description": "Last GOP size (expressed in number of frames).\nThis parameter is used to monitor the quality of encoding: normally, average GOP size should be equal to the last GOP size.\nIf this value is floating, this means that your transcoder is working in a flexible GOP size mode and some players may have problems.\nThis is not acceptable by most ABR usecases and it will not pass DVB validation protocol.\n", "type": "integer", "example": 28 }, "avg_gop": { "type": "integer", "description": "Average GOP size (expressed in number of frames) of the last 1000-2000 frames.\nThis parameter is used to monitor the quality of encoding: normally, average GOP size should be equal to the last GOP size.\nIf this value is floating, this means that your transcoder is working in flexible GOP size mode and some players may have problems.\nThis is not acceptable by most ABR usecases and it will not pass DVB validation protocol.\n", "example": 25 }, "length_size": { "enum": [ 2, 4 ], "default": 4, "type": "integer", "x-notice": "H264 private option", "description": "The size of the length field for H264 bitstream without start codes." 
}, "is_progressive": { "description": "Indicates if progressive scanning method is used for all frames of the track\n", "type": "boolean", "default": true }, "closed_captions": { "description": "Parameters of closed captions.", "items": { "allOf": [ { "$ref": "#/components/schemas/closed_captions" } ] }, "type": "array", "default": [] } } }, "track_info_video_configurable": { "type": "object", "properties": { "width": { "description": "The picture width in pixels on the display where it will be played by a player.\nIf you need to insert a web-player into a web page, use this value for choosing the player size.\n", "type": "integer", "format": "pixels", "x-api-allow": [ "smartcam" ], "x-format-description": "pixels" }, "height": { "description": "The picture height in pixels on the display where it will be played by a player.\nIf you need to insert a web-player into a web page, use this value for choosing the player size.\n", "type": "integer", "format": "pixels", "x-api-allow": [ "smartcam" ], "x-format-description": "pixels" }, "fps": { "description": "Frame rate (frames per second) - the speed at which a sequence of images is displayed on a screen.\nHigher frame rates capture more images per second, which makes for smoother video.\nThe standard frame rate for color television in the Phase Alternating Line (PAL) format is 25 fps.\nThe standard frame rate for color television in the National Television System Committee (NTSC) format is 29,97 fps\n(a little bit lower than the original frame rate of black and white NTSC television, equal to 30 fps.)\nIf interlaced TV is used, two fields of each frame (with odd-numbered lines and with even-numbered lines) are displayed consequently,\nbut the frame rate is actually not doubled (50 half-frames are still equal to 25 original frames). 
\n", "type": "number", "x-api-allow": [ "smartcam" ] }, "pix_fmt": { "allOf": [ { "$ref": "#/components/schemas/frame_video_pix_fmt" } ], "default": "yuv420p", "description": "The color model of the video." }, "num_refs_frames": { "type": "integer", "maximum": 32, "minimum": 0, "description": "The number of I-frames to be used for encoding." }, "sar_width": { "description": "The first number in SAR. SAR is the ratio of the width of the display video representation to the width of the pixel representation.\nSAR is used for creating non-anamorphic video from anamorphic video.\n", "default": 1, "type": "integer", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/transcoder-internals/#transcoder-options_sar" } }, "sar_height": { "description": "The second number in SAR. SAR is the ratio of the width of the display video representation to the width of the pixel representation.\nSAR is used for creating non-anamorphic video from anamorphic video.\n", "default": 1, "type": "integer", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/transcoder-internals/#transcoder-options_sar" } }, "pixel_width": { "description": "The picture width in pixels of the original video before transcoding.", "type": "integer", "format": "pixels", "x-format-description": "pixels" }, "pixel_height": { "description": "The picture height in pixels of the original video before transcoding.", "type": "integer", "format": "pixels", "x-format-description": "pixels" }, "level": { "type": "string", "description": "A set of constraints that indicate a degree of required decoder performance.\nThis parameter is used for compatibility with old devices.\n", "x-api-allow": [ "smartcam" ] }, "profile": { "description": "A specific codec-dependent profile of the output video.\nThe profile helps estimate whether the track can be played on a particular device.\n", "type": "string", "x-api-allow": [ "smartcam" ] }, "bframes": 
{ "description": "Average number of B-frames in a GOP. B-frames contain links to keyframes and P-frames before and after themselves.\nB-frames help to compress the video. However, some players impose limitations on this number: usually no more than 2 B-frames are used.\nThis value also defines the GOP structure - the repeated pattern of frames after the keyframe: P, BP, BBP, BBBP, or BBBBP.\n", "type": "integer", "x-notice": "calculated number of bframes for statistic", "example": 3 }, "gop_size": { "description": "The number of frames in a group of pictures (GOP). \nThe encoder will create all GOPs of an exactly identical size - as specified in this option.\nA bigger GOP can be good for video compression but it can result in big zap-time (the duration of time between changing a channel and displaying a new channel.)\n", "type": "integer", "x-api-allow": [ "smartcam" ] } } }, "track_info_audio": { "allOf": [ { "$ref": "#/components/schemas/track_info_base" }, { "$ref": "#/components/schemas/track_info_base_configurable" }, { "type": "object", "properties": { "codec": { "description": "Codec of the track. 
Different codecs do **not** get the same track.", "allOf": [ { "$ref": "#/components/schemas/frame_codec" } ], "example": "h264", "x-api-allow": [ "smartcam", "iris-hal" ] } } }, { "$ref": "#/components/schemas/track_info_audio_specific" } ] }, "track_info_audio_specific": { "type": "object", "properties": { "channels": { "description": "The number of audio channels.", "type": "integer", "example": 2, "x-api-allow": [ "smartcam" ] }, "sample_rate": { "description": "Sample rate, in hertz -\nthe number of samples per second taken from a continuous signal to make a discrete or digital signal.\n", "type": "integer", "example": 8000, "x-api-allow": [ "smartcam" ] }, "language": { "description": "Language value of the track, if applicable.", "type": "string", "example": "eng" } } }, "ti_audio_aac_spec": { "type": "object", "title": "AAC codec", "properties": { "sample_rate": { "description": "Sample rate, in hertz - \nthe number of samples per second taken from a continuous signal to make a discrete or digital signal.\nThe allowed values are: \n`0` - to copy input sample rate, a number (input audio is resampled with equalization)\n", "type": "integer", "example": 48000, "oneOf": [ { "const": 96000 }, { "const": 88200 }, { "const": 64000 }, { "const": 48000 }, { "const": 44100 }, { "const": 32000 }, { "const": 24000 }, { "const": 22050 }, { "const": 16000 }, { "const": 12000 }, { "const": 11025 }, { "const": 8000 }, { "const": 0 } ] }, "channels": { "description": "The number of audio channels in an output stream.", "type": "integer", "example": 2, "oneOf": [ { "const": 1 }, { "const": 2 }, { "const": 3 }, { "const": 4 }, { "const": 5 }, { "const": 6 }, { "const": 7 } ] } } }, "ti_audio_ac3_spec": { "type": "object", "title": "AC3/EAC3 codec", "properties": { "sample_rate": { "description": "Sample rate, in hertz - \nthe number of samples per second taken from a continuous signal to make a discrete or digital signal.\nThe allowed values are: \n`0` - to copy input sample 
rate, a number (input audio is resampled with equalization)\n", "type": "integer", "example": 48000, "oneOf": [ { "const": 48000 }, { "const": 44100 }, { "const": 32000 }, { "const": 0 } ] }, "channels": { "description": "The number of audio channels in an output stream.", "type": "integer", "example": 2, "oneOf": [ { "const": 1 }, { "const": 2 }, { "const": 3 }, { "const": 4 }, { "const": 5 }, { "const": 6 } ] } } }, "ti_audio_mp2a_spec": { "type": "object", "title": "MP2 audio codec", "properties": { "sample_rate": { "description": "Sample rate, in hertz - \nthe number of samples per second taken from a continuous signal to make a discrete or digital signal.\nThe allowed values are: \n`0` - to copy input sample rate, a number (input audio is resampled with equalization)\n", "type": "integer", "example": 48000, "oneOf": [ { "const": 48000 }, { "const": 44100 }, { "const": 32000 }, { "const": 24000 }, { "const": 22050 }, { "const": 16000 }, { "const": 0 } ] }, "channels": { "description": "The number of audio channels in an output stream.", "type": "integer", "example": 2, "oneOf": [ { "const": 1 }, { "const": 2 } ] } } }, "ti_audio_mp3_spec": { "type": "object", "title": "MP3 codec", "properties": { "sample_rate": { "description": "Sample rate, in hertz - \nthe number of samples per second taken from a continuous signal to make a discrete or digital signal.\nThe allowed values are: \n`0` - to copy input sample rate, a number (input audio is resampled with equalization)\n", "type": "integer", "example": 8000, "oneOf": [ { "const": 48000 }, { "const": 44100 }, { "const": 32000 }, { "const": 24000 }, { "const": 22050 }, { "const": 16000 }, { "const": 12000 }, { "const": 11025 }, { "const": 8000 }, { "const": 0 } ] }, "channels": { "description": "The number of audio channels in an output stream.", "type": "integer", "example": 2, "oneOf": [ { "const": 1 }, { "const": 2 } ] } } }, "ti_audio_opus_spec": { "type": "object", "title": "OPUS codec", "properties": { 
"sample_rate": { "description": "Sample rate, in hertz - \nthe number of samples per second taken from a continuous signal to make a discrete or digital signal.\nThe allowed values are: \n`0` - to copy input sample rate, a number (input audio is resampled with equalization)\n", "type": "integer", "example": 48000, "oneOf": [ { "const": 48000 }, { "const": 24000 }, { "const": 16000 }, { "const": 12000 }, { "const": 8000 }, { "const": 0 } ] }, "channels": { "description": "The number of audio channels in an output stream.", "type": "integer", "example": 2, "oneOf": [ { "const": 1 }, { "const": 2 }, { "const": 3 }, { "const": 4 }, { "const": 5 }, { "const": 6 } ] } } }, "ti_audio_pcma_spec": { "type": "object", "title": "PCM A-law/PCM mu-law codec", "properties": { "sample_rate": { "description": "Sample rate, in hertz - \nthe number of samples per second taken from a continuous signal to make a discrete or digital signal.\nThe allowed values are: \n`0` - to copy input sample rate, a number (input audio is resampled with equalization)\n", "type": "integer", "example": 8000, "oneOf": [ { "const": 8000 }, { "const": 0 } ] }, "channels": { "description": "The number of audio channels in an output stream.", "type": "integer", "example": 1, "oneOf": [ { "const": 1 } ] } } }, "ti_audio_pcm_spec": { "type": "object", "title": "RAW PCM", "properties": { "sample_rate": { "description": "Sample rate, in hertz - \nthe number of samples per second taken from a continuous signal to make a discrete or digital signal.\n", "type": "integer", "example": 8000 }, "channels": { "description": "The number of audio channels in an output stream.", "type": "integer", "example": 1 } } }, "ti_audio_aac": { "allOf": [ { "$ref": "#/components/schemas/track_info_base_configurable" }, { "$ref": "#/components/schemas/transcoder_track_info_audio_spec" }, { "$ref": "#/components/schemas/ti_audio_aac_spec" } ] }, "ti_audio_ac3": { "allOf": [ { "$ref": 
"#/components/schemas/track_info_base_configurable" }, { "$ref": "#/components/schemas/transcoder_track_info_audio_spec" }, { "$ref": "#/components/schemas/ti_audio_ac3_spec" } ] }, "ti_audio_mp2a": { "allOf": [ { "$ref": "#/components/schemas/track_info_base_configurable" }, { "$ref": "#/components/schemas/transcoder_track_info_audio_spec" }, { "$ref": "#/components/schemas/ti_audio_mp2a_spec" } ] }, "ti_audio_mp3": { "allOf": [ { "$ref": "#/components/schemas/track_info_base_configurable" }, { "$ref": "#/components/schemas/transcoder_track_info_audio_spec" }, { "$ref": "#/components/schemas/ti_audio_mp3_spec" } ] }, "ti_audio_opus": { "allOf": [ { "$ref": "#/components/schemas/track_info_base_configurable" }, { "$ref": "#/components/schemas/transcoder_track_info_audio_spec" }, { "$ref": "#/components/schemas/ti_audio_opus_spec" } ] }, "ti_audio_pcma": { "allOf": [ { "$ref": "#/components/schemas/track_info_base_configurable" }, { "$ref": "#/components/schemas/transcoder_track_info_audio_spec" }, { "$ref": "#/components/schemas/ti_audio_pcma_spec" } ] }, "ti_audio_pcm": { "allOf": [ { "$ref": "#/components/schemas/track_info_base_configurable" }, { "$ref": "#/components/schemas/transcoder_track_info_audio_spec" }, { "$ref": "#/components/schemas/ti_audio_pcm_spec" } ] }, "track_info_text": { "allOf": [ { "$ref": "#/components/schemas/track_info_base" }, { "$ref": "#/components/schemas/track_info_base_configurable" }, { "type": "object", "properties": { "codec": { "description": "Codec of the track. 
Different codecs do **not** get the same track.", "allOf": [ { "$ref": "#/components/schemas/frame_codec" } ], "example": "h264", "x-api-allow": [ "smartcam", "iris-hal" ] } } }, { "$ref": "#/components/schemas/track_info_text_specific" } ] }, "track_info_text_specific": { "type": "object", "properties": { "language": { "description": "Language value of the track, if applicable.", "type": "string", "example": "eng" } } }, "track_info_application": { "allOf": [ { "$ref": "#/components/schemas/track_info_base" }, { "$ref": "#/components/schemas/track_info_base_configurable" }, { "type": "object", "properties": { "codec": { "description": "Codec of the track. Different codecs do **not** get the same track.", "allOf": [ { "$ref": "#/components/schemas/frame_codec" } ], "example": "h264", "x-api-allow": [ "smartcam", "iris-hal" ] } } }, { "$ref": "#/components/schemas/track_info_application_specific" } ] }, "track_info_application_specific": { "type": "object", "properties": { "language": { "description": "Language value of the track, if applicable.", "type": "string", "example": "eng" } } }, "track_info_metadata": { "allOf": [ { "$ref": "#/components/schemas/track_info_base" }, { "$ref": "#/components/schemas/track_info_base_configurable" }, { "type": "object", "properties": { "codec": { "description": "Codec of the track. Different codecs do **not** get the same track.", "allOf": [ { "$ref": "#/components/schemas/frame_codec" } ], "example": "h264", "x-api-allow": [ "smartcam", "iris-hal" ] } } } ] }, "track_info_full": { "allOf": [ { "$ref": "#/components/schemas/track_info_base" }, { "$ref": "#/components/schemas/track_info_base_configurable" }, { "type": "object", "properties": { "codec": { "description": "Codec of the track. 
Different codecs do **not** get the same track.", "allOf": [ { "$ref": "#/components/schemas/frame_codec" } ], "example": "h264", "x-api-allow": [ "smartcam", "iris-hal" ] } } }, { "$ref": "#/components/schemas/track_info_audio_specific" }, { "$ref": "#/components/schemas/track_info_video_specific" }, { "$ref": "#/components/schemas/track_info_video_configurable" } ] }, "closed_captions": { "type": "object", "properties": { "language": { "description": "Language of closed captions.", "type": "string", "example": "eng" }, "name": { "description": "The name under which the track will be displayed in the player.", "type": "string", "example": "English" }, "type": { "x-private": true, "description": "The standard of the closed captions. \nThe value is either 608 or 708 for CEA-608 and CEA-708 standards respectively.\n\nThis value is important for HLS and DASH players to display closed captions. \nFor such players, the presence of closed captions must be explicitly declared in the manifest files.\n", "type": "string", "x-notice": "CC 608, 708", "example": "608" }, "id": { "x-private": true, "description": "The number of the channel that has closed captions. 
\nIt's an integer between 1 and 4 for CEA-608 and between 1 and 63 for CEA-708.\n", "type": "string", "x-notice": "CC 608 channel, CC 708 service num", "example": "3" } } }, "media_info": { "allOf": [ { "$ref": "#/components/schemas/media_info_specific" }, { "$ref": "#/components/schemas/media_info_common" } ] }, "media_info_specific": { "type": "object", "properties": { "flow_type": { "description": "Whether it is a file with a finite start and end time or a live stream.", "oneOf": [ { "const": "file" }, { "const": "stream" }, { "const": "dvr_file" }, { "const": "dvr_stream" } ], "type": "string", "example": "stream" }, "tracks": { "description": "Information about available tracks (video, audio, or text).", "items": { "allOf": [ { "$ref": "#/components/schemas/track_info" } ] }, "type": "array", "default": [], "x-api-allow": [ "smartcam" ] }, "duration": { "type": "number", "format": "ticks", "description": "Duration of the media, if applicable.", "x-format-description": "ticks" } } }, "media_info_common": { "type": "object", "properties": { "provider": { "description": "The media provider of this content.", "type": "string", "example": "Netflix" }, "title": { "description": "Human-readable title of the media.", "type": "string", "example": "Bunny" }, "stream_id": { "type": "integer", "example": 253, "description": "The identifier of the transport stream for MPEG TS streams." }, "program_id": { "type": "integer", "example": 110, "description": "The program ID for MPEG TS streams." 
} } }, "transcoder_track_info": { "oneOf": [ { "$ref": "#/components/schemas/transcoder_track_info_audio" }, { "$ref": "#/components/schemas/transcoder_track_info_video" } ], "discriminator": { "propertyName": "content", "mapping": { "video": "#/components/schemas/transcoder_track_info_video", "audio": "#/components/schemas/transcoder_track_info_audio" } } }, "transcoder_track_info_audio": { "oneOf": [ { "$ref": "#/components/schemas/ti_audio_aac" }, { "$ref": "#/components/schemas/ti_audio_opus" }, { "$ref": "#/components/schemas/ti_audio_mp2a" }, { "$ref": "#/components/schemas/ti_audio_mp3" }, { "$ref": "#/components/schemas/ti_audio_ac3" }, { "$ref": "#/components/schemas/ti_audio_pcma" }, { "$ref": "#/components/schemas/ti_audio_pcm" } ], "discriminator": { "propertyName": "codec", "mapping": { "aac": "#/components/schemas/ti_audio_aac", "opus": "#/components/schemas/ti_audio_opus", "mp2a": "#/components/schemas/ti_audio_mp2a", "mp3": "#/components/schemas/ti_audio_mp3", "ac3": "#/components/schemas/ti_audio_ac3", "eac3": "#/components/schemas/ti_audio_ac3", "pcmu": "#/components/schemas/ti_audio_pcma", "pcma": "#/components/schemas/ti_audio_pcma", "pcm": "#/components/schemas/ti_audio_pcm" } }, "x-record-definition": "#/components/schemas/ti_audio_aac" }, "transcoder_track_info_video": { "allOf": [ { "$ref": "#/components/schemas/track_info_base_configurable" }, { "type": "object", "properties": { "codec": { "description": "Codec of the track. 
Different codecs do **not** get the same track.", "allOf": [ { "$ref": "#/components/schemas/frame_codec" } ], "example": "h264", "x-api-allow": [ "smartcam", "iris-hal" ] }, "preset": { "description": "A set of values that determine a certain encoding speed, which influences a compression ratio.\nA slower preset will provide better compression (compression is quality per file size).\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/transcoder-internals/#transcoder-options_preset" }, "default": "veryfast", "allOf": [ { "$ref": "#/components/schemas/tc_preset" } ], "example": "medium" }, "profile": { "description": "Limits the output to a specific H.264 profile.", "x-api-allow": [ "iris-hal", "smartcam" ], "allOf": [ { "$ref": "#/components/schemas/tc_profile" } ] }, "level": { "description": "A set of constraints that indicate a degree of required decoder performance.\nThis parameter is used for compatibility with old devices.\n", "x-api-allow": [ "iris-hal", "smartcam" ], "anyOf": [ { "allOf": [ { "$ref": "#/components/schemas/h264_level" } ] }, { "allOf": [ { "$ref": "#/components/schemas/hevc_level" } ] }, { "allOf": [ { "$ref": "#/components/schemas/mp2v_level" } ] } ] }, "size": { "description": "Target size of the image and the strategy to achieve it.", "x-api-allow": [ "smartcam" ], "allOf": [ { "$ref": "#/components/schemas/tc_size" } ] }, "sar": { "description": "Target aspect ratio.", "allOf": [ { "$ref": "#/components/schemas/tc_sar" } ] }, "logo": { "description": "The configuration of a logo \"burned\" into the video track.\nThe transcoder adds the logo before the video is resized as specified in the `size` option. 
\nThis means that the logo can be visibly stretched if the size was changed significantly.\n", "allOf": [ { "$ref": "#/components/schemas/tc_logo" } ] }, "alogo": { "description": "The configuration of a logo added to the video track after the video was resized as specified in the `size` option.\n\nThis prevents the logo picture from stretching that might occur when the `logo` option is used.\nYou will need to prepare and specify a separate file with a logo for each size of the resulting video track.\n", "allOf": [ { "$ref": "#/components/schemas/tc_logo" } ] }, "fps": { "description": "Frame rate (frames per second) - the speed at which a sequence of images is displayed on a screen.\n\nHigher frame rates capture more images per second, which makes for smoother video.\nThe standard frame rate for color television in the Phase Alternating Line (PAL) format is 25 fps.\nThe standard frame rate for color television in the National Television System Committee (NTSC) format is 29.97 fps\n(a little bit lower than the original frame rate of black and white NTSC television, equal to 30 fps.)\nIf interlaced TV is used, two fields of each frame (with odd-numbered lines and with even-numbered lines) are displayed consecutively,\nbut the frame rate is actually not doubled (50 half-frames are still equal to 25 original frames).\n", "x-api-allow": [ "smartcam", "iris-hal" ], "allOf": [ { "$ref": "#/components/schemas/tc_fps" } ] }, "bframes": { "description": "Number of B-frames between I and P-frames. B-frames contain links to keyframes and P-frames before and after themselves.\nB-frames help to compress the video. However, some players impose limitations on this number: usually no more than 2 B-frames are used.\nThis value also defines the GOP structure - the repeated pattern of frames after the keyframe: P, BP, BBP, BBBP, or BBBBP.\n\nWhen set to 0, this option disables b-frames. 
This may be necessary, for example, when broadcasting to RTSP.\n", "enum": [ 0, 1, 2, 3, 4 ], "type": "integer", "example": 3 }, "refs": { "description": "The number of reference frames in a GOP.\nReference frames are frames of a compressed video that are used to define other frames (P-frames and B-frames).\n", "maximum": 6, "minimum": 1, "type": "integer" }, "gop": { "description": "Sets the number of frames in a GOP.\nThe encoder will create all GOPs of an exactly identical size - as specified in this option.\n", "x-api-allow": [ "smartcam" ], "type": "integer", "example": 150 }, "qp_range": { "description": "The ranges of the quantization parameter for different types of frames in a GOP.\n\nQuantization is an algorithm used for video compression. It is based on fragmentation of video frames.\nIncreasing this parameter allows improving the compression but may lower the picture quality.\nUsually, these ranges are defined automatically by the transcoder, but for some types of transcoders it makes sense to set them manually.\n", "allOf": [ { "$ref": "#/components/schemas/tc_qp_range" } ] }, "threads": { "description": "Number of threads used by the encoder when transcoding with CPU (it is not used for other types of transcoder).\nThis parameter allows increasing performance by adding threads. By default, it is autodetected.\n", "type": "integer" }, "open_gop": { "description": "Whether open GOP is used. Open GOP contains P-frames that refer to the frames before the keyframe.\nIt allows decreasing the bitrate by 5-7%, but can result in picture artifacts.\n\nDo not enable this option if the track will be played over segment-based protocols (HLS, DASH, etc.) because \nabsence of keyframes or IDR frames in the same segment with P-frames may prevent playback. 
\n[Read more about tracks, GOP and segments](https://flussonic.com/doc/live-stream-internals/).\n", "default": false, "type": "boolean" }, "interlace": { "description": "This parameter is used to get an interlaced stream from a progressive one.\nThe allowed values are `true` (interlaced video), `false` (progressive video), or one of the methods for producing interlaced video supported for the selected type of transcoder.\n", "allOf": [ { "$ref": "#/components/schemas/interlace_settings" } ] }, "rc_method": { "description": "A method for creating output video with constant bitrate suitable for broadcasting to television networks.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/transcoder-internals/#transcoder-rc_method" }, "allOf": [ { "$ref": "#/components/schemas/rc_method" } ] }, "temporal_tradeoff": { "description": "Drop some frames in dynamic scenes when the transcoder does not have time to encode all frames.\nThe allowed values are:\n\n* `15` - drop 1 from 5\n* `13` - drop 1 from 3\n* `12` - drop 1 from 2\n* `23` - drop 2 from 3\n* `0` - do not drop frames\n", "type": "integer" }, "vbv_bufsize": { "description": "Virtual buffer size, in bits. The default value is `gop / fps * bitrate`.\n", "type": "integer" }, "resize_mode": { "description": "The mode to be used for resizing video tracks. It is one of the computing platforms for Flussonic Coder:\n\n* vic - Video Image Converter, specific to Nvidia Jetson\n* cuda - CUDA (or Compute Unified Device Architecture)\n", "allOf": [ { "$ref": "#/components/schemas/transcoder_resize_mode" } ] }, "burn": { "description": "Configuration of burn-in text, timestamp, or subtitles to video frames.", "x-api-allow": [ "smartcam", "iris-hal" ], "allOf": [ { "$ref": "#/components/schemas/tc_burn" } ] }, "extra": { "additionalProperties": { "type": "string" }, "type": "object", "description": "Some additional options." 
}, "counters": { "description": "Transcoder per encoder counters", "allOf": [ { "$ref": "#/components/schemas/tc_encoder_counters" } ] } } } ] }, "transcoder_track_info_audio_spec": { "type": "object", "properties": { "codec": { "description": "Audio codec (the AAC codec is used by default).", "anyOf": [ { "$ref": "#/components/schemas/frame_audio_codec" } ], "type": "string", "example": "opus", "default": "aac", "x-api-allow": [ "smartcam", "iris-hal" ] }, "language": { "description": "Language value of the track, if applicable.", "type": "string", "example": "eng" }, "input_track": { "description": "Input audio track to be transcoded.\n", "anyOf": [ { "type": "integer" } ], "example": 1 }, "volume": { "description": "Output audio volume. The value can be specified in decibels (dB) or it can be an integer/float (3, 0.5, etc.).\n\nIf it is just an integer or a float, the output audio volume is calculated by this formula:\n\n`output_volume = volume * input_volume`\n\nIf specified in decibels (dB), the output audio volume is calculated as follows:\n\n`output_volume = input_volume +/- volume`\n\ndepending on whether it is a positive (+9dB) or a negative value (-6dB).\n\nBy default, it equals 1 (the input audio volume).\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/change-stream-volume/" }, "type": "string", "pattern": "^([0-9]+(\\.[0-9]+)?|(\\+|\\-)[0-9]+(\\.[0-9]+)?dB)$", "example": "-6dB" }, "split_channels": { "description": "This option allows splitting each audio track with multiple channels into several mono tracks.\n", "type": "boolean", "default": false }, "counters": { "description": "Transcoder per encoder counters", "allOf": [ { "$ref": "#/components/schemas/tc_encoder_counters" } ] } } }, "tc_encoder_counters": { "type": "object", "properties": { "dup_frames_added": { "type": "integer", "description": "Duplicate frames generated by transcoder when input FPS falls below target" }, 
"overlimit_frames_removed": { "type": "integer", "description": "Frames discarded by transcoder to maintain target FPS" } } }, "webrtc_prefer_video_codec": { "type": "string", "enum": [ "h264", "av1" ] }, "webrtc_transport": { "type": "string", "oneOf": [ { "title": "UDP", "const": "udp" }, { "title": "TCP", "const": "tcp" } ] }, "output_audio": { "type": "string", "oneOf": [ { "title": "Keep", "const": "keep", "description": "Keep the original audio codec." }, { "title": "AAC", "const": "aac", "description": "Keep AAC if available or transcode to AAC; delete other tracks if any." }, { "title": "Add_AAC", "const": "add_aac", "description": "Add AAC if it was not available already while keeping the original track." } ] }, "h264_level": { "enum": [ "1", "1b", "1.1", "1.2", "1.3", "2", "2.1", "2.2", "3", "3.0", "3.1", "3.2", "4", "4.0", "4.1", "4.2", "5", "5.1", "5.2", "6", "6.1", "6.2" ], "type": "string" }, "hevc_level": { "enum": [ "1", "2", "2.1", "3", "3.1", "4", "4.1", "5", "5.1", "5.2", "6", "6.1", "6.2" ], "type": "string" }, "mp2v_level": { "oneOf": [ { "title": "Low", "const": "low" }, { "title": "Main", "const": "main" }, { "title": "High", "const": "high" }, { "title": "High 1440", "const": "high1440" } ], "type": "string" }, "transcoder_device": { "oneOf": [ { "title": "CPU", "const": "cpu" }, { "title": "Intel Quick Sync Video", "const": "qsv" }, { "title": "Nvidia NVENC (encoder only)", "const": "nvenc" }, { "title": "Nvidia NVENC 2", "const": "nvenc2", "x-private": true }, { "title": "Flussonic Coder", "const": "coder" }, { "title": "Raspberry Pi OMX", "const": "omx", "x-private": true }, { "title": "L4T2", "const": "l4t2", "x-private": true } ], "type": "string" }, "transcoder_resize_mode": { "oneOf": [ { "title": "VIC", "const": "vic" }, { "title": "CUDA", "const": "cuda" } ], "type": "string" }, "tc_bitrate": { "anyOf": [ { "oneOf": [ { "title": "Copy the input track as-is without any processing.", "const": "copy", "type": "string" } ] }, { "type": 
"integer" } ] }, "tc_deviceid": { "anyOf": [ { "oneOf": [ { "title": "Auto", "const": "auto", "type": "string" } ] }, { "type": "integer" } ] }, "tc_fps": { "anyOf": [ { "oneOf": [ { "title": "Any", "const": "any", "type": "string" }, { "title": "Auto", "const": "auto", "type": "string" } ] }, { "type": "integer" } ] }, "tc_preset": { "oneOf": [ { "title": "Slow", "const": "slow" }, { "title": "Medium", "const": "medium" }, { "title": "Fast", "const": "fast" }, { "title": "Very fast", "const": "veryfast" }, { "title": "Ultra fast", "const": "ultrafast" } ], "type": "string" }, "tc_profile": { "oneOf": [ { "title": "Simple", "const": "simple" }, { "title": "Baseline", "const": "baseline" }, { "title": "Main", "const": "main" }, { "title": "Main 10", "const": "main10" }, { "title": "High", "const": "high" }, { "title": "High 10", "const": "high10" }, { "title": "High 422", "const": "high422" }, { "title": "High 444", "const": "high444" } ], "type": "string" }, "rc_method": { "oneOf": [ { "title": "VBR", "const": "vbr", "description": "The encoder will not encode a stream to be DVB compliant.\nHowever, VBR (Variable Bit Rate) produces better compression compared to CBR at about the same quality.\n" }, { "title": "CBR", "const": "cbr", "description": "The encoder will produce a DVB compliant stream.\nThe bitrate of the output stream will be stable to fit the fixed-bandwidth channel. 
\nIn other words, we guarantee that the bitrate has a certain upper bound in a sliding window.\n" }, { "title": "CBR2pass", "const": "cbr2pass", "description": "The encoder will encode the video once in CBR, \nthen it will encode it a second time in CBR but using the information from the previous pass to improve quality.\n" } ], "type": "string" }, "tc_label_box": { "type": "object", "properties": { "borderw": { "description": "The width, in pixels, of the border to be drawn around the text, timestamp, or subtitles.", "type": "integer", "example": 10 }, "color": { "description": "Box color.", "anyOf": [ { "type": "string", "examples": { "default": { "value": "black" }, "mylive/bunny": { "value": "white" } } }, { "type": "string", "format": "hexcolor", "example": "#d62d20", "x-format-description": "hexcolor" } ] }, "alpha": { "description": "Box opacity (from 0.0 to 1.0, where 0.0 is completely transparent and 1.0 is completely opaque).", "maximum": 1, "minimum": 0, "type": "number", "example": 0 } } }, "tc_label_font": { "type": "object", "properties": { "file": { "description": "The subpath to the `.ttf` font file in the `font` subdirectory of the `/etc/flussonic/` directory. \nThis means you can place the font file like `/etc/flussonic/font/SomeFont.ttf`.\n\nIf the font file specified is missing in `/etc/flussonic/font/`, the default `FiraCode-Regular.ttf` font will be used, which is included in Flussonic.\n\nYou can also specify the full path to a font file. Make sure you put the font file in the directory you specified. 
\n", "type": "string", "example": "/usr/share/fonts/truetype/freefont/FONT_NAME.ttf" }, "size": { "description": "The font size in pixels.", "type": "integer", "example": 24 }, "color": { "description": "Font color.", "anyOf": [ { "type": "string", "examples": { "default": { "value": "black" }, "mylive/bunny": { "value": "white" } } }, { "type": "string", "format": "hexcolor", "example": "#d62d20", "x-format-description": "hexcolor" } ] }, "alpha": { "description": "Font opacity (from 0.0 to 1.0, where 0.0 is completely transparent and 1.0 is completely opaque).", "maximum": 1, "minimum": 0, "type": "number", "example": 1 } } }, "tc_label": { "type": "object", "properties": { "text": { "description": "Text, time, or subtitles to burn into video frames.\n\n* For text - it is the text\n* For subtitles - it is the subtitles track, e.g., `t1`.\n* For time - it is the time in one of the formats:\n** `%T` - the time in 24-hour notation (`%H:%M:%S`).\n** `%F` - equivalent to `%Y-%m-%d` (the ISO 8601 date format).\n", "type": "string" }, "x": { "description": "The offset of the text, timestamp, or subtitles position, in pixels, to the right or left of the center of the screen.", "type": "integer" }, "y": { "description": "The offset of the text, timestamp, or subtitles position, in pixels, up or down from the center of the screen.", "type": "integer" }, "position": { "description": "Position to burn text.", "oneOf": [ { "title": "Top left", "const": "tl" }, { "title": "Bottom left", "const": "bl" }, { "title": "Top right", "const": "tr" }, { "title": "Bottom right", "const": "br" }, { "title": "Center", "const": "c" }, { "title": "Center top", "const": "ct" }, { "title": "Center bottom", "const": "cb" } ], "type": "string" }, "font": { "description": "Font to be used for text, timestamp, or subtitles burn-in to video frames.", "allOf": [ { "$ref": "#/components/schemas/tc_label_font" } ] }, "box": { "description": "Parameters of the box around the text, timestamp, or 
subtitles.", "allOf": [ { "$ref": "#/components/schemas/tc_label_box" } ] } } }, "tc_burn": { "type": "object", "properties": { "text": { "description": "Configuration of burn-in text to video frames. \n", "allOf": [ { "$ref": "#/components/schemas/tc_label" } ] }, "time": { "description": "Configuration of burn-in timestamp to video frames. \n", "allOf": [ { "$ref": "#/components/schemas/tc_label" } ] } } }, "tc_global": { "type": "object", "properties": { "target": { "description": "The intended use of the stream.\nSpecifying this option applies useful defaults in conformance with standards.\n", "type": "string", "oneOf": [ { "title": "UHD television", "const": "uhdtv", "description": "The resulting stream is ready to be transmitted over Ultra HD television networks\ni.e. H.264 4K (2160p) with higher bitrate, BT.2020 colors\nand AAC audio\n" }, { "title": "HD television", "const": "hdtv", "description": "The resulting stream is ready to be transmitted over HD television networks,\ni.e. H.264 1080p with BT.709 colors\nand AAC audio\n" }, { "title": "SD television (PAL)", "const": "sdtv_pal", "description": "The resulting stream is ready to be transmitted over older European (PAL) television networks,\ni.e. H.264 576i video with lower bitrate, BT.470 colors, 16:11 SAR\nand AAC audio\n" }, { "title": "SD television (NTSC)", "const": "sdtv_ntsc", "description": "The resulting stream is ready to be transmitted over older American (NTSC) television networks,\ni.e. 
H.264 480i video with lower bitrate, SMPTE 170M colors, 40:33 SAR\nand AC-3 audio\n" } ] }, "hw": { "description": "Transcoder hardware device type to be used for transcoding a stream.", "allOf": [ { "$ref": "#/components/schemas/transcoder_device" } ] }, "deviceid": { "description": "Identifier of hardware device to be used for transcoding a stream.", "allOf": [ { "$ref": "#/components/schemas/tc_deviceid" } ] }, "external": { "description": "If this parameter is set to `true` (the default), the transcoder runs in a separate process from Flussonic.\n\nIf it is set to `false`, the transcoder will run in the same process as Flussonic. \nThis mode speeds up encoding, especially when encoding audio or when using an Nvidia device. \nHowever, a transcoder error may cause Flussonic to crash.\n", "type": "boolean" }, "keep_ts": { "x-private": true, "description": "Do not bind frame timestamps to realtime before transcoding (disables timestamps being monotonic even if source switches/restarts).", "type": "boolean" }, "fps": { "description": "FPS (frames per second) value to be applied for any video track in the stream. May be overridden for a track.", "x-private": true, "allOf": [ { "$ref": "#/components/schemas/tc_fps" } ], "example": 24 }, "gop": { "description": "GOP (group of pictures) size (in frames) to be applied for any video track in the stream.\nThe encoder will create all GOPs of an exactly identical size - as specified in this option.\nMay be overridden for a track. \n", "type": "integer", "example": 150 }, "resize_mode": { "description": "The mode to be used for resizing video tracks. It is one of the computing platforms for Flussonic Coder:\n\n* vic - Video Image Converter, specific to Nvidia Jetson\n* cuda - CUDA (or Compute Unified Device Architecture)\n", "x-private": true, "allOf": [ { "$ref": "#/components/schemas/transcoder_resize_mode" } ] }, "burn": { "description": "Configuration of text, timestamp or subtitles burn-in to video frames. 
\n", "allOf": [ { "$ref": "#/components/schemas/tc_burn" } ] } } }, "tc_crop": { "type": "object", "properties": { "left": { "description": "The `x` coordinate of the upper-left corner of the output video within the input video.", "type": "integer" }, "top": { "description": "The `y` coordinate of the upper-left corner of the output video within the input video.", "type": "integer" }, "width": { "description": "The width of the output video.", "type": "integer" }, "height": { "description": "The height of the output video.", "type": "integer" } }, "required": [ "left", "top", "width", "height" ] }, "deinterlace_settings": { "anyOf": [ { "oneOf": [ { "title": "Enabled", "const": true, "description": "Deinterlacing enabled." }, { "title": "Disabled", "const": false, "description": "Deinterlacing disabled." } ], "type": "boolean" }, { "oneOf": [ { "title": "Adaptive", "const": "adaptive", "description": "Use adaptive deinterlacing method." }, { "title": "CUDA yadif", "const": "yadif", "description": "Use CUDA yadif deinterlacing method." } ], "type": "string" } ] }, "tc_decoder": { "type": "object", "properties": { "pix_fmt": { "description": "The required pixel format according to a color model.", "allOf": [ { "$ref": "#/components/schemas/frame_video_pix_fmt" } ] }, "deinterlace": { "description": "Activate deinterlacing, i.e., converting an interlaced image to a progressive image. \nIt is necessary for comfortable viewing of legacy TV video on PC/mobile devices.\n", "allOf": [ { "$ref": "#/components/schemas/deinterlace_settings" } ], "example": true }, "deinterlace_rate": { "description": "This parameter is used when encoding with Nvidia NVENC.\nYou can remove duplicate frames that were produced after deinterlacing, preventing increased bitrate, by one of two methods.\n", "oneOf": [ { "title": "Frame", "const": "frame", "description": "From field sequence `1a 1b 2a 2b 3a 3b` we get frame sequence `1a1b 2a2b 3a3b`. 
\nThe FPS stays the same.\n" }, { "title": "Field", "const": "field", "description": "Fields `1a 1b 2a 2b 3a 3b` transform into `1a1b 1b2a 2a2b 2b3a` frames. \nThe FPS increases two times after transcoding.\n" } ], "type": "string", "example": "frame" }, "crop": { "description": "Video cropping options.", "allOf": [ { "$ref": "#/components/schemas/tc_crop" } ] }, "drop_frame_interval": { "description": "This parameter is applicable for NVIDIA Jetson transcoder only.\nThis is the number of frames after which the decoder skips a frame, thus saving resources. For example:\n\n* 1 - skip each frame\n* 2 - skip each second frame\n* 3 - skip each third frame, etc.\n\nThis option can be useful for streams with high FPS (e.g., 60) as it allows increasing the bandwidth.\n", "maximum": 1000, "minimum": 1, "type": "integer", "example": 3 }, "no_dpb": { "description": "Switch off the decoded picture buffer. Works for the streams with 1 reference frame.\nThe default value is `false`.\n", "type": "boolean", "example": false }, "streaming_frame": { "description": "Allow receiving incomplete frames from the input buffer.\nIf it is set to `true`, the decoder can start decoding before the complete frame is received.\n", "type": "boolean", "example": false } } }, "tc_audio_opts": { "oneOf": [ { "$ref": "#/components/schemas/tc_audio_aac" }, { "$ref": "#/components/schemas/tc_audio_opus" }, { "$ref": "#/components/schemas/tc_audio_mp2a" }, { "$ref": "#/components/schemas/tc_audio_mp3" }, { "$ref": "#/components/schemas/tc_audio_ac3" }, { "$ref": "#/components/schemas/tc_audio_pcma" }, { "$ref": "#/components/schemas/tc_audio_pcm" } ], "discriminator": { "propertyName": "codec", "mapping": { "aac": "#/components/schemas/tc_audio_aac", "opus": "#/components/schemas/tc_audio_opus", "mp2a": "#/components/schemas/tc_audio_mp2a", "mp3": "#/components/schemas/tc_audio_mp3", "ac3": "#/components/schemas/tc_audio_ac3", "eac3": "#/components/schemas/tc_audio_ac3", "pcmu": 
"#/components/schemas/tc_audio_pcma", "pcma": "#/components/schemas/tc_audio_pcma", "pcm": "#/components/schemas/tc_audio_pcm" } }, "x-record-definition": "#/components/schemas/tc_audio_all" }, "tc_audio_all": { "allOf": [ { "$ref": "#/components/schemas/tc_audio_aac" } ] }, "tc_audio_base": { "type": "object", "properties": { "codec": { "description": "Audio codec (the AAC codec is used by default).", "anyOf": [ { "$ref": "#/components/schemas/frame_audio_codec" }, { "$ref": "#/components/schemas/frame_audio_raw_codec" } ], "type": "string", "example": "opus", "default": "aac" }, "bitrate": { "description": "Audio bitrate. The allowed values are:\n\n* `copy` - the bitrate of the original stream is copied to the outgoing stream.\n* a full number of bits (e.g., 64000) or a short form of the number with `k` (e.g., 64k).\n", "allOf": [ { "$ref": "#/components/schemas/tc_bitrate" } ], "example": 64000 }, "avol": { "description": "Output audio volume. The value can be specified in decibels (dB) or it can be an integer/float (3, 0.5, etc.).\n\nIf it is just an integer or a float, the output audio volume is calculated by this formula:\n\n`output_volume = avol * input_volume`\n\nIf specified in decibels (dB), the output audio volume is calculated as follows:\n\n`output_volume = input_volume +/- avol`\n\ndepending on whether it is a positive (+9dB) or a negative value (-6dB).\n\nBy default, it equals 1 (the input audio volume).\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/change-stream-volume/" }, "type": "string", "pattern": "^([0-9]+(\\.[0-9]+)?|(\\+|\\-)[0-9]+(\\.[0-9]+)?dB)$", "example": "-6dB" }, "split_channels": { "description": "This option allows splitting each audio track with multiple channels into several mono tracks.\n", "type": "boolean", "default": false } } }, "tc_audio_aac": { "allOf": [ { "$ref": "#/components/schemas/tc_audio_base" }, { "$ref": "#/components/schemas/ti_audio_aac_spec" } ] }, 
"tc_audio_opus": { "allOf": [ { "$ref": "#/components/schemas/tc_audio_base" }, { "$ref": "#/components/schemas/ti_audio_opus_spec" } ] }, "tc_audio_ac3": { "allOf": [ { "$ref": "#/components/schemas/tc_audio_base" }, { "$ref": "#/components/schemas/ti_audio_ac3_spec" } ] }, "tc_audio_pcma": { "allOf": [ { "$ref": "#/components/schemas/tc_audio_base" }, { "$ref": "#/components/schemas/ti_audio_pcma_spec" } ] }, "tc_audio_pcm": { "allOf": [ { "$ref": "#/components/schemas/tc_audio_base" }, { "$ref": "#/components/schemas/ti_audio_pcm_spec" } ] }, "tc_audio_mp3": { "allOf": [ { "$ref": "#/components/schemas/tc_audio_base" }, { "$ref": "#/components/schemas/ti_audio_mp3_spec" } ] }, "tc_audio_mp2a": { "allOf": [ { "$ref": "#/components/schemas/tc_audio_base" }, { "$ref": "#/components/schemas/ti_audio_mp2a_spec" } ] }, "tc_size": { "type": "object", "properties": { "width": { "description": "The picture width in pixels on the display where it will be played by a player.\nIf you need to insert a web-player into a web page, use this value for choosing the player size.\nA value of -1 means that the heigth will be used to calculate the actual width with maintaining the aspect ratio.\nOnly one of width or height may have value -1.\nZero value (0) is not allowed.\n", "type": "integer", "minimum": -1 }, "height": { "description": "The picture height in pixels on the display where it will be played by a player.\nIf you need to insert a web-player into a web page, use this value for choosing the player size.\nA value of -1 means that the width will be used to calculate the actual height with maintaining the aspect ratio.\nOnly one of width or height may have value -1.\nZero value (0) is not allowed.\n", "type": "integer", "minimum": -1 }, "strategy": { "description": "The algorithm of the picture resizing: crop, scale, or fit.\n", "default": "fit", "oneOf": [ { "title": "Crop", "const": "crop" }, { "title": "Scale", "const": "scale" }, { "title": "Fit", "const": "fit" } ], 
"type": "string", "example": "crop" }, "background": { "description": "The color of the area in the player that is not occupied by the video after resizing. \nIt is used only with the 'fit' strategy.\n", "anyOf": [ { "oneOf": [ { "title": "Blur", "const": "blur" } ], "type": "string" }, { "type": "string", "format": "hexcolor", "x-format-description": "hexcolor" } ] } } }, "tc_sar": { "type": "object", "properties": { "x": { "description": "The first number in SAR. SAR is the ratio of the width of the display video representation to the width of the pixel representation.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/transcoder-internals/#transcoder-options_sar" }, "type": "integer" }, "y": { "description": "The second number in SAR. SAR is the ratio of the width of the display video representation to the width of the pixel representation.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/transcoder-internals/#transcoder-options_sar" }, "type": "integer" } } }, "tc_logo": { "type": "object", "properties": { "path": { "description": "Path to the logo image.", "type": "string", "pattern": "^.+\\.png$", "example": "@chan.png" }, "x": { "description": "The offset of the logo position, in pixel, to the right or left to the center of the screen.", "type": "integer", "example": 10 }, "y": { "description": "The offset of the logo position, in pixel, up or down to the center of the screen.", "type": "integer", "example": 10 }, "position": { "description": "Position to place the logo.\n", "oneOf": [ { "title": "Top left", "const": "tl" }, { "title": "Top right", "const": "tr" }, { "title": "Bottom left", "const": "bl" }, { "title": "Bottom right", "const": "br" }, { "title": "Center", "const": "c" } ], "type": "string", "example": "tl" } } }, "tc_qp_range": { "type": "object", "properties": { "qpmini": { "description": "Minimal quantization parameter for I-frames.", 
"maximum": 100, "minimum": 0, "type": "integer" }, "qpmaxi": { "description": "Maximal quantization parameter for I-frames.", "maximum": 100, "minimum": 0, "type": "integer" }, "qpminp": { "description": "Minimal quantization parameter for P-frames.", "maximum": 100, "minimum": 0, "type": "integer" }, "qpmaxp": { "description": "Maximal quantization parameter for P-frames.", "maximum": 100, "minimum": 0, "type": "integer" }, "qpminb": { "description": "Minimal quantization parameter for B-frames.", "maximum": 100, "minimum": 0, "type": "integer" }, "qpmaxb": { "description": "Maximal quantization parameter for B-frames.", "maximum": 100, "minimum": 0, "type": "integer" } } }, "interlace_settings": { "anyOf": [ { "oneOf": [ { "title": "Top field first", "const": "tff", "description": "Top field first. This method is used with hw=qsv, nvenc." }, { "title": "Bottom field first", "const": "bff", "description": "Bottom field first. This method is used with hw=qsv, nvenc." }, { "title": "Top field first separated", "const": "tff_separated", "description": "Top field first, separated fields. This method is used with hw=qsv." }, { "title": "Bottom field first separated", "const": "bff_separated", "description": "Bottom field first, separated fields. This method is used with hw=qsv." }, { "title": "MBAFF", "const": "mbaff", "description": "Interlaced libx264 MBAFF method. This method is used only with hw=cpu." 
} ], "type": "string" }, { "type": "boolean", "description": "Enables encoding into interlaced video by using the default method for the encoder specified\n(`mbaff` is the default method for `hw=cpu`, `tff` is the default method for `hw=qsv`, `hw=nvenc`)\n" } ] }, "tc_video_opts": { "type": "object", "required": [ "track" ], "properties": { "track": { "description": "Number of a video track.", "type": "integer", "example": 1 }, "bitrate": { "description": "The bitrate of a video track.", "allOf": [ { "$ref": "#/components/schemas/tc_bitrate" } ], "example": 1000000 }, "codec": { "description": "The video codec.", "default": "h264", "oneOf": [ { "title": "H.264", "const": "h264" }, { "title": "HEVC", "const": "hevc" }, { "title": "AV1", "const": "av1" }, { "title": "MP2V", "const": "mp2v" } ], "type": "string" }, "preset": { "description": "A set of values that determine a certain encoding speed, which influences a compression ratio. \nA slower preset will provide better compression (compression is quality per file size).\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/transcoder-internals/#transcoder-options_preset" }, "default": "veryfast", "allOf": [ { "$ref": "#/components/schemas/tc_preset" } ], "example": "medium" }, "profile": { "description": "Limits the output to a specific H.264 profile.", "allOf": [ { "$ref": "#/components/schemas/tc_profile" } ] }, "level": { "description": "A set of constraints that indicate a degree of required decoder performance.\nThis parameter is used for compatibility with old devices.\n", "anyOf": [ { "allOf": [ { "$ref": "#/components/schemas/h264_level" } ] }, { "allOf": [ { "$ref": "#/components/schemas/hevc_level" } ] }, { "allOf": [ { "$ref": "#/components/schemas/mp2v_level" } ] } ] }, "size": { "description": "Target size of the image and the strategy to achieve it.", "allOf": [ { "$ref": "#/components/schemas/tc_size" } ] }, "sar": { "description": "Target aspect 
ratio.", "allOf": [ { "$ref": "#/components/schemas/tc_sar" } ] }, "logo": { "description": "The configuration of a logo \"burned\" into the video track.\nThe transcoder adds the logo before the video is resized as specified in the `size` option. \nThis means that the logo can be visibly stretched if the size was changed significantly.\n", "allOf": [ { "$ref": "#/components/schemas/tc_logo" } ] }, "alogo": { "description": "The configuration of a logo added to the video track after the video was resized as specified in the `size` option.\n\nThis prevents the logo picture from stretching that might occur when the `logo` option is used. \nYou will need to prepare and specify a separate file with a logo for each size of the resulting video track.\n", "allOf": [ { "$ref": "#/components/schemas/tc_logo" } ] }, "fps": { "description": "Frame rate (frames per second) - the speed at which a sequence of images is displayed on a screen.\n\nHigher frame rates capture more images per second, which makes for smoother video.\nThe standard frame rate for color television in the Phase Alternating Line (PAL) format is 25 fps.\nThe standard frame rate for color television in the National Television System Committee (NTSC) format is 29,97 fps\n(a little bit lower than the original frame rate of black and white NTSC television, equal to 30 fps.)\nIf interlaced TV is used, two fields of each frame (with odd-numbered lines and with even-numbered lines) are displayed consequently,\nbut the frame rate is actually not doubled (50 half-frames are still equal to 25 original frames).\n", "allOf": [ { "$ref": "#/components/schemas/tc_fps" } ] }, "bframes": { "description": "Number of B-frames in a GOP. B-frames contain links to keyframes and P-frames before and after themselves.\nB-frames help to compress the video. 
However, some players impose limitations on this number: usually no more than 2 B-frames are used.\nThis value also defines the GOP structure - the repeated pattern of frames after the keyframe: P, BP, BBP, BBBP, or BBBBP.\n\nWhen set to 0, this option disables B-frames. This may be necessary, for example, when broadcasting to RTSP.\n", "enum": [ 0, 1, 2, 3, 4 ], "type": "integer", "example": 3 }, "refs": { "description": "The number of reference frames in a GOP.\nReference frames are frames of a compressed video that are used to define other frames (P-frames and B-frames).\n", "maximum": 6, "minimum": 1, "type": "integer" }, "gop": { "description": "Sets the number of frames in a GOP. \nThe encoder will create all GOPs of an exactly identical size - as specified in this option.\n", "type": "integer", "example": 150 }, "qp_range": { "description": "The ranges of the quantization parameter for different types of frames in a GOP.\n\nQuantization is an algorithm used for video compression. It is based on fragmentation of video frames.\nIncreasing this parameter allows improving the compression but may lower the picture quality.\nUsually, these ranges are defined automatically by the transcoder, but for some types of transcoders it makes sense to set them manually.\n", "allOf": [ { "$ref": "#/components/schemas/tc_qp_range" } ] }, "threads": { "description": "Number of threads used by the encoder when transcoding with CPU (it is not used for other types of transcoder).\nThis parameter allows increasing performance by adding new threads. By default, it is autodetected.\n", "type": "integer" }, "open_gop": { "description": "Whether open GOP is used. 
An open GOP contains P-frames that refer to the frames before the keyframe.\nIt allows decreasing the bitrate by 5-7%, but can result in picture corruption.\n", "default": false, "type": "boolean" }, "interlace": { "description": "This parameter is used to get an interlaced stream from a progressive one.\nThe allowed values are `true` (interlaced video), `false` (progressive video), or one of the methods for producing interlaced video supported for the selected type of transcoder.\n", "allOf": [ { "$ref": "#/components/schemas/interlace_settings" } ] }, "rc_method": { "description": "A method for creating output video with constant bitrate suitable for broadcasting to television networks.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/transcoder-internals/#transcoder-rc_method" }, "allOf": [ { "$ref": "#/components/schemas/rc_method" } ] }, "temporal_tradeoff": { "description": "Drop some frames in dynamic scenes when the transcoder does not have time to encode all frames.\nThe allowed values are: \n\n* `15` - drop 1 from 5\n* `13` - drop 1 from 3\n* `12` - drop 1 from 2\n* `23` - drop 2 from 3\n* `0` - do not drop frames\n", "type": "integer" }, "vbv_bufsize": { "description": "Virtual buffer size, in bits. The default value is `gop / fps * bitrate`.\n", "type": "integer" }, "resize_mode": { "description": "The mode to be used for resizing video tracks. It is one of the computing platforms for Flussonic Coder:\n\n* vic - Video Image Converter, specific to NVIDIA Jetson\n* cuda - CUDA (or Compute Unified Device Architecture)\n", "allOf": [ { "$ref": "#/components/schemas/transcoder_resize_mode" } ] }, "burn": { "description": "Configuration of burn-in text, timestamp, or subtitles to video frames.", "allOf": [ { "$ref": "#/components/schemas/tc_burn" } ] }, "extra": { "additionalProperties": { "type": "string" }, "type": "object", "description": "Some additional options." 
} } }, "transcoder_opts": { "type": "object", "properties": { "global": { "description": "Transcoder settings to be used for transcoding. \n", "allOf": [ { "$ref": "#/components/schemas/tc_global" } ], "x-default": { "$ref": "#/components/schemas/tc_global" } }, "decoder": { "description": "Decoder settings to be used for transcoding. \n", "allOf": [ { "$ref": "#/components/schemas/tc_decoder" } ], "x-default": { "$ref": "#/components/schemas/tc_decoder" } }, "audio": { "description": "List of audio tracks the stream audio track will be transcoded into.\n", "allOf": [ { "$ref": "#/components/schemas/tc_audio_opts" } ], "x-default": { "$ref": "#/components/schemas/tc_audio_opts" }, "deprecated": true, "x-delete-at": 23.09 }, "video": { "description": "List of video tracks the stream video track will be transcoded into. \n", "items": { "allOf": [ { "$ref": "#/components/schemas/tc_video_opts" } ] }, "type": "array", "default": [], "deprecated": true, "x-delete-at": 24.08 }, "tracks": { "description": "Info on the tracks. In the case of iris-hal, the first video track must be\nhigh-resolution track, second one (if present) must be lower-resolution\n", "items": { "allOf": [ { "$ref": "#/components/schemas/transcoder_track_info" } ] }, "type": "array", "default": [], "x-api-allow": [ "smartcam", "iris-hal" ] } } }, "dvr_schedule_range": { "items": { "type": "integer" }, "type": "array" }, "dvr_range": { "type": "object", "properties": { "from": { "description": "The beginning of the recorded DVR range. Use opened_at instead.\nMention that opened_at has milliseconds\n", "type": "integer", "format": "utc", "example": 1525186456, "deprecated": true, "x-delete-at": 24.11, "x-format-description": "Unix timestamp in seconds", "minimum": 1000000000, "maximum": 10000000000 }, "duration": { "description": "The duration of the recorded DVR range.\nUse closed_at instead of this field. 
\n", "type": "integer", "format": "seconds", "example": 28800, "deprecated": true, "x-delete-at": 24.11, "x-format-description": "seconds" }, "opened_at": { "type": "integer", "format": "utc_ms", "description": "The time when this range was started. Naming is standard for whole flussonic ecosystem.\n\nIs a replacement for field `from`\n", "examples": { "default": { "value": 1637094994000 } }, "x-format-description": "Unix timestamp in milliseconds", "minimum": 1000000000000, "maximum": 10000000000000 }, "closed_at": { "type": "integer", "format": "utc_ms", "description": "The the of the last recorded data.\n\nPlease notice that closed_at could be changed. There are two reasons.\n- Cleaner process reduced data. Read [more](https://flussonic.com/doc/api/reference/#tag/stream/operation/stream_get/response%7Cdvr%7Cepisodes_url).\n- Recording is still working. Near real-time value means that DVR is active at the moment.\n\nThis is a replacement for `duration` field\n", "examples": { "default": { "value": 1637094994000 } }, "x-format-description": "Unix timestamp in milliseconds", "minimum": 1000000000000, "maximum": 10000000000000 } } }, "dvr_base_config": { "type": "object", "properties": { "storage_limit": { "description": "Maximum disk consumption in bytes. When this limit is reached, \nthe oldest segment of the recording will be overridden by later data.\n\nThis option affects both continuous recording and locked episodes (see `episodes_url`).\n\nIf `episodes_url` does not respond, the archive clean-up by `storage_limit` is not performed\nto avoid deleting the recordings that should not be deleted.\n", "type": "integer", "format": "bytes", "example": 400000000000, "x-format-description": "bytes" }, "expiration": { "description": "Archive depth - a period (in seconds) back from the current moment during which the \ncontigious part of archive is stored. 
\nAs time passes, the parts of the recording which are older than the archive depth are deleted.\n\nIf you have the `episodes_expiration` option enabled, then some parts of DVR that are \nlocked by the episode signalling mechanism may be kept longer than this `expiration` depth.\n\nIf `episodes_url` does not respond, the archive clean-up by `expiration` is not performed;\nonly the archive with expired episodes (`episodes_expiration`) is cleaned up until the `episodes_url` restores.\n", "type": "integer", "format": "seconds", "examples": { "default": { "value": 604800 } }, "x-api-allow": [ "central-layouter" ], "x-format-description": "seconds" }, "episodes_expiration": { "description": "Additional archive depth in seconds for episodes. If set, episodes and their corresponding DVR record\nwill be saved for `expiration + episodes_expiration` seconds.\n\nThe archive clean-up within `[expiration, expiration+episodes_expiration]` seconds of the\nrecording is performed depending on the [external_episodes_list](https://flussonic.com/doc/api/config-external/#tag/dvr/operation/external_episodes_list)\nresponse of `episodes_url`.\n\nAnything older than `expiration+episodes_expiration` seconds will\nbe cleaned even if `episodes_url` does not respond.\n", "type": "integer", "format": "seconds", "examples": { "default": { "value": 6048000 } }, "x-format-description": "seconds" }, "episodes_url": { "description": "External URL that will be triggered for fetching the episodes list.\nTake a look at the config_external API for the method `external_episodes_list`.\n\nIf `episodes_url` is not set or responds with any HTTP code other than `200` or `501` or does not respond within a timeout,\nthe archive clean-up is only performed by `episodes_expiration` time while `expiration` and `storage_limit` are ignored until the `episodes_url` restores.\n\n\nIf `episodes_url` is a blank string, then the current config_external API endpoint will be used to request episodes.\nThis is the most common usage of this field.\n", "type": 
"string", "examples": { "default": { "value": "http://central-host.local/config-external/episodes" }, "simple": { "value": "" } } }, "dvr_replicate": { "description": "Whether DVR replication is used. Replication means that a DVR archive is stored on two (or more) Flussonic servers.\nIt can be used for reliability or for broadcasting with a time shift. Learn more in [Flussonic documentation](https://flussonic.com/doc/scale-dvr-playback-with-new-server/).\n", "type": "boolean", "example": true }, "replication_speed": { "x-private": true, "description": "Replication speed limitation. \nThe secondary server can limit the total speed of replication in order not to interrupt the live broadcast or reduce its quality. \n", "type": "integer", "format": "speed", "x-format-description": "speed" }, "replication_port": { "description": "Replication port. \nBy default, replication is enabled on the port specified when configuring the M4F source. \nYou can specify a separate port for replication.\n", "allOf": [ { "$ref": "#/components/schemas/network_port" } ], "example": 8002 }, "schedule": { "description": "One or several time intervals for recording by schedule.\nThe beginning and the end of each interval are set in \"hhmm\" format (without leading zeros ) according to UTC standard. \nFor example, `2330` is for 23:30, `800` - for 08:00. The interval can go over midnight, e.g. 22:00-1:30. In this case it is set as follows: `[2200,130]`.\n\nScheduled recording can be useful for the channels with part-time broadcasting. 
\nIt allows saving disk space significantly.\n", "items": { "allOf": [ { "$ref": "#/components/schemas/dvr_schedule_range" } ] }, "type": "array", "example": [ [ 800, 1600 ], [ 2200, 130 ] ] }, "dvr_offline": { "description": "If this option is enabled, Flussonic detects DVR at the start of the stream,\nbut does not start recording immediately and waits for an external API request.\n", "type": "boolean" }, "copy": { "description": "The URL of another storage to copy the blobs (hours of the archive) into. \nCopying is done when a blob is complete (i.e., once an hour), and therefore helps significantly reduce the number of network requests to a cloud storage.\n", "type": "string", "format": "dvr_url", "example": "s3://token@minio.mycompany.com/dvr-bucket", "x-format-description": "dvr_url" } } }, "debug_stream_spec": { "type": "object", "properties": { "ips": { "description": "Client IP addresses whose data is recorded.", "type": "array", "items": { "allOf": [ { "$ref": "#/components/schemas/network_addr" } ] }, "example": [ "10.10.10.9" ] }, "tracepoints": { "description": "Points in the stream pipeline where data is recorded.\n", "type": "array", "items": { "$ref": "#/components/schemas/debug_stream_tracepoints" }, "default": [ "input" ], "example": [ "input", "stream", "webrtc_play_network" ] }, "root": { "description": "The path to the directory where the session data will be recorded.\nRecommended for debugging needs **only**.\n", "type": "string", "format": "dvr_url", "example": "/tmp/debug", "x-format-description": "dvr_url" }, "storage_limit": { "description": "Maximum disk consumption in bytes. 
When this limit is reached, \nthe oldest segment of the recording will be overwritten by newer data.\n", "type": "integer", "format": "bytes", "example": 400000000000, "x-format-description": "bytes" }, "expiration": { "description": "A period (in seconds) back from the current moment during which the files are stored.\nAs time passes, the files which are older than this period are overwritten by newer files.\n", "type": "integer", "format": "seconds", "example": 604800, "x-format-description": "seconds" }, "duration": { "description": "Time for recording in milliseconds.", "type": "integer", "format": "milliseconds", "example": 6000, "x-format-description": "milliseconds" } }, "required": [ "root" ] }, "debug_stream_tracepoints": { "anyOf": [ { "oneOf": [ { "title": "Input", "const": "input", "description": "Record raw bytes at stream input. Not all protocols support this." }, { "title": "Stream", "const": "stream", "description": "Record frames after all pre-processing, as would be output via push or play." }, { "title": "WEBRTC network output data", "const": "webrtc_play_network", "description": "Record WebRTC play session RTP packets." }, { "title": "WEBRTC frames before encoding", "const": "webrtc_play_frame", "description": "Record WebRTC play session frames." }, { "title": "RTSP network output data", "const": "rtsp_play_network", "description": "Record RTSP play session packets." } ], "type": "string" } ] }, "motion_detector_spec": { "type": "object", "properties": { "enabled": { "description": "This parameter allows Flussonic to receive motion detection events from cameras via ONVIF protocol. \nFlussonic adds corresponding marks in the archive recordings in the places where motion was detected. 
\n", "default": true, "type": "boolean", "example": true }, "pull": { "description": "The address from which Flussonic will get motion detection events.\nThe events are taken from the ONVIF pull point provided by the cameras.\n\nFor ONVIF-compatible cameras, the format of the address is:\n`onvif://{login}:{password}@{address}:{port}/onvif/device_service`\n\nFor SmartCam cameras, the format of the address is:\n`http+iris://{login}:{password}@{address}/smartcam/api/v3`\n\nFor Iris cameras, the format of the address is:\n`http+iris://{login}:{password}@{address}/iris/api/v2`\n", "externalDocs": { "description": "Find more information here", "url": "http://www.onvif.org/specs/core/ONVIF-Core-Specification.pdf" }, "type": "string", "examples": { "default": { "value": "onvif://admin:admin@127.0.0.1:80" }, "iris": { "value": "http+iris://localhost" }, "smartcam": { "value": "http+iris://admin:admin@10.77.1.130/smartcam/api/v3" } } } } }, "cache_spec": { "type": "object", "properties": { "reference": { "description": "The name of the cache.", "type": "string", "format": "cache_name", "example": "cache1", "x-format-description": "cache_name" }, "misses": { "description": "The number of requests necessary for a file to be cached.", "type": "integer", "example": 3 }, "storage_limit": { "description": "Maximum disk consumption in bytes. 
\nWhen this limit is reached, the oldest files will be overwritten by newer files.\n", "type": "integer", "format": "bytes", "example": 400000, "x-format-description": "bytes" }, "expiration": { "description": "A period (in seconds) back from the current moment during which the files are stored.\nAs time passes, the files which are older than this period are overwritten by newer files.\n", "type": "integer", "format": "seconds", "example": 3600, "x-format-description": "seconds" } } }, "vbi_line": { "anyOf": [ { "maximum": 23, "minimum": 6, "type": "integer" }, { "maximum": 335, "minimum": 318, "type": "integer" } ] }, "ttxt_descriptors": { "properties": { "page": { "description": "Page number of the teletext received from an SDI card.\nIt is defined according to the ETS 300 706 teletext specification.\n\nThe information about the pages is received from the stream provider.\n", "type": "integer", "x-primary-key": true, "example": 100 }, "lang": { "description": "The language code of the teletext.", "anyOf": [ { "$ref": "#/components/schemas/language_value" } ] }, "type": { "description": "Teletext page type defined according to the Specification for Service Information (SI) in DVB systems, 6.2.32 Teletext descriptor in EN 300 468 Digital Video Broadcasting (DVB).", "enum": [ "initial", "subtitle", "impaired" ], "type": "string", "example": "initial" } }, "required": [ "page", "lang", "type" ], "type": "object" }, "vbi_service": { "enum": [ "ttxt" ], "type": "string" }, "srt_config": { "allOf": [ { "type": "object", "properties": { "port": { "description": "Listening port or a `host:port` pair for the SRT configuration.\nMust be unique on the whole server.\n", "allOf": [ { "$ref": "#/components/schemas/listen_spec" } ], "example": 9050 }, "v": { "description": "Which implementation to use.\n", "oneOf": [ { "const": "srt1", "description": "libsrt bindings" }, { "const": "srt2", "description": "erlang implementation" } ], "x-private": true }, "timeout": { 
"description": "Data transmission timeout in seconds. \nIf set to `false` then data transmission time is unlimited. This is a defalut behavior.\n", "anyOf": [ { "type": "integer", "format": "seconds", "x-format-description": "seconds" }, { "enum": [ false ], "type": "boolean" } ], "x-notice": "SRTO_RCVTIMEO SRTO_SNDTIMEO (ms, -1 no limit)", "example": 10 } } }, { "$ref": "#/components/schemas/srt_config_base" } ] }, "srt_config_base": { "type": "object", "properties": { "minversion": { "description": "The minimum SRT version that is required from the peer for SRT publication.\n", "type": "string", "pattern": "^[0-9]+\\.[0-9]+\\.[0-9]+$", "example": "1.1.0" }, "version": { "description": "Required SRT version.\n", "type": "string", "pattern": "^[0-9]+\\.[0-9]+\\.[0-9]+$", "example": "1.3.0" }, "enforcedencryption": { "description": "Whether both connection parties must have the same password set (including empty, in other words, with no encryption). \nIf the passwords do not match or only one side is unencrypted, the connection is rejected. \nThe default value is `true`.\n", "type": "boolean", "example": true }, "passphrase": { "description": "The password for the encrypted transmission. \nIts length should be not less than 10 and not more than 79 characters.\n\nUnlike [password](https://flussonic.com/doc/api/reference/#tag/stream/operation/stream_save%7Cbody%7Cpassword),\nthe passphrase is not transmitted openly but is used to encrypt the key that is transmitted by the Caller\nand decoded by Listener.\n", "type": "string", "minLength": 10, "maxLength": 79, "example": "9876543210", "x-notice": "SRTO_PASSPHRASE (\"\")" }, "linger": { "description": "The time, in seconds, that the socket waits for the unsent data before closing. 
\nThe default value is 180.\n", "type": "integer", "format": "seconds", "x-notice": "SRTO_LINGER on, (180s) (off - 0s)", "example": 15, "x-format-description": "seconds" }, "latency": { "description": "The latency value for both directions of the socket.\nBy default, the initial latency is 0 when transmitting and 120 ms when receiving video.\nThe actual value is established after the connection handshake.\nAn increased value helps tolerate network losses and delays.\n", "type": "integer", "format": "milliseconds", "x-notice": "SRTO_LATENCY", "example": 150, "x-format-description": "milliseconds" }, "streamid": { "description": "A string of at most 512 characters set on the socket before the connection. \n\nThis string is a part of a callback that is sent by the caller and registered by the listener. \nBased on this information, the listener can accept or reject the connection, select the desired data stream, or set an appropriate passphrase for the connection.\n\nIts format is `#!::` optionally followed by the parameters:\n* `r=` - stream name\n* `m=` - mode expected for the connection: `publish` (if the caller wants to send the stream data) or `request` (if the caller wants to receive the stream).\n* `password=` - a password for the authorization in a publish session (not recommended, better use the `passphrase` option instead)\n\nDuring SRT sessions the following parameters are automatically added to streamid:\n* `s=` - the identifier of a session\n* `a=` - Flussonic version\n\nNOTE: you can specify a string in the format you need; to disable this extension, you need to specify an empty string.\n", "type": "string", "maxLength": 512, "example": "#!::r=my-stream,m=publish" } } }, "mpegts_lang_track": { "anyOf": [ { "enum": [ "default" ], "type": "string" }, { "type": "string" } ] }, "audio_track": { "type": "object", "required": [ "channels" ], "properties": { "track": { "description": "The audio track name in Media Server.", "type": "string", "example": "a1", "x-primary-key": true }, 
"sample_type": { "description": "The audio track format.", "type": "string", "oneOf": [ { "const": "pcm" }, { "const": "smpte337" } ], "default": "pcm" }, "channels": { "description": "The list of SDI audio channels from which you want to assemble the audio track.", "type": "array", "items": { "type": "integer" } }, "lang": { "description": "The audio track language.", "anyOf": [ { "$ref": "#/components/schemas/language_value" } ] } } }, "push_audio_track": { "properties": { "track": { "description": "The audio track name.", "type": "string", "example": "a1" }, "sample_type": { "description": "The audio track output format.", "type": "string", "oneOf": [ { "const": "pcm" } ], "default": "pcm" }, "channels": { "description": "The list of SDI audio channel numbers to which the audio track shall be pushed.", "type": "array", "items": { "type": "integer" } } }, "type": "object", "required": [ "track", "channels" ] }, "stream_dvr_specific_spec": { "type": "object", "properties": { "reference": { "description": "Stream can refer to the globally declared DVR. This option referres to a single DVR entry.", "type": "string", "format": "dvr_name", "example": "localdvr0", "x-format-description": "dvr_name" }, "remotes": { "description": "The address of the source from which Media server will read the archive. 
This address will not be used for capturing live video; it is strictly for data exchange on the availability of the archive and the transmission of segments.", "type": "array", "items": { "type": "string", "format": "dvr_url", "pattern": "^(m4f|m4fs|m4s|m4ss|hls)://.*$", "examples": { "default": { "value": "m4f://clusterkey@secondserver/otherstream" } }, "x-format-description": "dvr_url" } } } }, "stream_dvr_spec": { "allOf": [ { "$ref": "#/components/schemas/stream_dvr_specific_spec" }, { "$ref": "#/components/schemas/dvr_base_config" } ] }, "subtitle_style": { "type": "object", "properties": { "align": { "description": "Horizontal alignment of subtitles.\nAllowed values: `left`, `center`, `right`.\n", "type": "string", "example": "center" }, "valign": { "description": "Vertical alignment of subtitles.\nAllowed values: `top`, `middle`, `bottom`.\n", "type": "string", "example": "bottom" } } }, "transponder_pid": { "type": "object", "properties": { "pid": { "description": "A PID to assign to a matched track or system table.\nPID identifies the payload (media or service) in the resulting MPTS stream.\n\nThe multiplexer will include only the tracks with specified PIDs.\n\nIt is possible to set PID values for video, audio and other media tracks, as well as for PMT and SDT.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/multiplex-several-streams-into-mpts-for-contribution/#choosing-output-tracks" }, "allOf": [ { "$ref": "#/components/schemas/ts_pid" } ], "x-primary-key": true, "openmetrics_label": "pid" }, "content": { "description": "Content of the track.", "enum": [ "system", "video", "audio", "application", "metadata" ], "type": "string", "example": "audio" }, "codec": { "description": "Codec for this pid. 
Use this if just content+track is not enough.", "allOf": [ { "$ref": "#/components/schemas/frame_codec" } ], "example": "scte35" }, "track": { "description": "Index of a track with the specified content and, optionally, codec.\nE.g. when you specify content=audio and track=2, the second audio track will be selected.\nIf you specify content=audio, codec=aac and track=1, the first AAC track will be selected\neven if it is the third audio track and the fifth track in the stream media_info.\n", "type": "integer", "example": 1 }, "bitrate": { "description": "Bitrate of the track.", "type": "integer", "format": "speed", "example": 2543, "x-format-description": "speed" }, "stream_type": { "description": "Custom program element type in PMT.\nCombined with bypass (content=metadata, track=0), this option makes\nproprietary PSI streams appear in the PMT program info with the given stream_type field.\n\nThis is an optional parameter for advanced users. By default, the Media Server\nautomatically sets the stream_type according to the track codec.\n", "type": "integer", "minimum": 1, "maximum": 255, "example": 12 }, "es_info": { "description": "Raw elementary stream descriptors to describe a proprietary stream in the PMT program info.\nPlease refer to ISO/IEC 13818-1, section 2.6, for the syntax.\n\nThis is an optional parameter for advanced users. 
By default, the Media Server\nautomatically sets the stream_type according to the track codec.\n", "type": "string", "format": "hexbinary", "example": "52010D", "x-format-description": "hexbinary" }, "stats": { "description": "Detailed runtime information about the multiplexer pid.", "allOf": [ { "$ref": "#/components/schemas/transponder_pid_stats" } ], "readOnly": true } }, "required": [ "pid", "content", "track" ] }, "transponder_pid_stats": { "type": "object", "properties": { "payload": { "description": "The payload bytes count.", "type": "integer", "format": "bytes", "openmetrics_metric": "pid_payload", "x-metric-type": "counter", "x-format-description": "bytes" }, "fillers": { "description": "The filler bytes count.", "type": "integer", "format": "bytes", "openmetrics_metric": "pid_fillers", "x-metric-type": "counter", "x-format-description": "bytes" }, "stuffing": { "description": "The stuffing packets count.", "type": "integer", "openmetrics_metric": "pid_stuffing", "x-metric-type": "counter" }, "trimmed_bytes": { "description": "The trimmed bytes count.", "type": "integer", "format": "bytes", "openmetrics_metric": "pid_trimmed_bytes", "x-metric-type": "counter", "x-format-description": "bytes" }, "trimmed_frames": { "description": "The trimmed frames count.", "type": "integer", "openmetrics_metric": "pid_trimmed_frames", "x-metric-type": "counter" } } }, "vision_spec": { "type": "object", "properties": { "alg": { "description": "The algorithm used for video analytics.\n", "type": "string", "oneOf": [ { "const": "faces", "description": "The algorithm for face recognition is used." }, { "const": "plates", "description": "The algorithm for license plate recognition is used." 
} ], "example": "faces", "x-api-allow": [ "vision-config-external", "vision", "central", "watcher-admin", "watcher-client" ] }, "areas": { "description": "This parameter allows you to select specific polygonal area(s) for detection.\nBy default, it is empty, and the recognition system searches over the entire camera field of view.\n\nEach area is specified as a sequence of comma-separated coordinates of vertices of the polygon: `x0,y0,x1,y1,x2,y2,...`.\nThe vertices are specified in a counter-clockwise direction. Multiple areas are separated by `:`.\n", "type": "string", "x-api-allow": [ "vision-config-external", "vision", "central", "watcher-admin", "watcher-client" ] } } }, "auth_spec": { "type": "object", "properties": { "url": { "description": "The URL of an HTTP backend.", "allOf": [ { "$ref": "#/components/schemas/auth_url" } ], "example": "http://middleware-address/auth/v2" }, "domains": { "description": "Specifying the domains, within which playing this video is allowed. \nThis does not work for those clients that do not pass the value of Referer HTTP header.\n", "items": { "type": "string" }, "type": "array", "example": [ "mycompany.com" ] }, "max_sessions": { "description": "The maximal number of streams or files the user can view simultaneously.\nThis limitation allows to prevent users from full restreaming to their servers.\n", "type": "integer", "example": 5000 }, "allowed_countries": { "description": "Explicit list of countries (two-letter country codes according to ISO 3166-1) that have access to the content without any other checks. \n\nFlussonic uses the MaxMind GeoLite2 Country database to map a country to a block of IP addresses. \nNew releases of GeoIP2 databases come out more often than the releases of Flussonic server, so sometimes the used database can become outdated. 
\nTherefore we recommend installing a separate GeoIP2 library and setting up Flussonic to use it.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/restrict-viewer-country-with-geoip/" }, "items": { "type": "string", "format": "iso3166", "x-format-description": "iso3166" }, "type": "array", "example": [ "US", "DE", "GB" ] }, "disallowed_countries": { "description": "Explicit list of countries (two-letter country codes according to ISO 3166-1) that are banned from accessing the content. \n\nFlussonic uses the MaxMind GeoLite2 Country database to map a country to a block of IP addresses. \nNew releases of GeoIP2 databases come out more often than the releases of Flussonic server, so sometimes the database in use can become outdated. \nTherefore we recommend installing a separate GeoIP2 library and setting up Flussonic to use it.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/restrict-viewer-country-with-geoip/" }, "items": { "type": "string", "format": "iso3166", "x-format-description": "iso3166" }, "type": "array", "example": [ "US", "DE", "GB" ] }, "soft_limitation": { "description": "If this option is enabled and the `max_sessions` limitation is used, the extra sessions are interrupted not immediately, but within 30 or 90 seconds.\nThis can be useful for middlewares that cannot generate a new token for every new stream or file request \nand therefore need time to understand that all sessions are being used.\n", "type": "boolean", "example": false }, "session_keys": { "description": "A list of keys to generate a session ID value, allowing you to configure the authorization scheme, \nwhich is a hash sum calculated as follows: `hash(name + ip + proto)`.\nThis parameter allows finishing one session and starting another one with the same authorization token.\nThe keys `name`, `ip`, and `proto` are required.\n", "externalDocs": { "description": "Find more information here", "url": 
"https://flussonic.com/doc/authorize-clients/#auth-session-keys" }, "items": { "allOf": [ { "$ref": "#/components/schemas/session_key" } ] }, "type": "array", "example": [ "name", "token", "proto", "ip" ] }, "extra": { "additionalProperties": { "type": "string" }, "type": "object", "description": "Some additional options." } } }, "pusher_status": { "anyOf": [ { "enum": [ "starting", "pending", "retry", "error" ], "type": "string" }, { "allOf": [ { "$ref": "#/components/schemas/session_status" } ] } ] }, "pusher_standby_status": { "oneOf": [ { "title": "Pusher is sending right now packets, because it does not see any traffic from main source.", "const": "active" }, { "title": "Pusher can see traffic from main source, so it is holding and does not send any packets.", "const": "waiting" } ] }, "play_protocols_spec": { "type": "object", "properties": { "whitelist": { "description": "- If set to `True`, server **allows** a playback only for listed protocols;\n- If set to `False`, server **forbids** a playback only for listed protocols;\n", "default": false, "type": "boolean" }, "hls": { "description": "Whether to allow or deny an HLS stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "cmaf": { "description": "Whether to allow or deny an LL-HLS stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "dash": { "description": "Whether to allow or deny a DASH stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "player": { "description": "Whether to allow or deny playback in embed.html, depending on the `whitelist` properties.", "type": "boolean" }, "mss": { "description": "Whether to allow or deny an MSS stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "rtmp": { "description": "Whether to allow or deny an RTMP stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "rtsp": { "description": "Whether to allow or deny an RTSP stream 
playback, depending on the `whitelist` properties.", "type": "boolean" }, "m4f": { "description": "Whether to allow or deny an M4F stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "m4s": { "description": "Whether to allow or deny an M4S stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "mseld": { "description": "Whether to allow or deny an MSE-LD stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "tshttp": { "description": "Whether to allow or deny an MPEG-TS stream playback over HTTP(S), depending on the `whitelist` properties.", "type": "boolean" }, "webrtc": { "description": "Whether to allow or deny an WebRTC stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "srt": { "description": "Whether to allow or deny an SRT stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "shoutcast": { "description": "Whether to allow or deny a SHOUTcast/Icecast stream playback, depending on the `whitelist` properties.", "type": "boolean" }, "mp4": { "description": "Whether to allow or deny an MP4 file download over HTTP(S), depending on the `whitelist` properties.\nUsed to export DVR segment(s) in MP4 file.\n", "type": "boolean" }, "jpeg": { "description": "Whether to allow or deny delivering JPEG thumbnails over HTTP(S), depending on the `whitelist` properties.", "type": "boolean" }, "api": { "description": "Whether to allow or deny API requests, depending on the `whitelist` properties.", "type": "boolean" } } }, "stream_config": { "allOf": [ { "$ref": "#/components/schemas/stream_config_specific" }, { "$ref": "#/components/schemas/stream_config_base" }, { "$ref": "#/components/schemas/stream_config_input" }, { "$ref": "#/components/schemas/stream_config_media" }, { "$ref": "#/components/schemas/stream_config_onpremises" }, { "$ref": "#/components/schemas/stream_config_single_media" }, { "$ref": 
"#/components/schemas/stream_config_deprecated" }, { "$ref": "#/components/schemas/stream_config_additional" } ] }, "stream_config_specific": { "type": "object", "properties": { "name": { "type": "string", "format": "media_name", "description": "Globally unique stream name.", "readOnly": true, "x-primary-key": true, "openmetrics_label": "name", "examples": { "default": { "value": "hockey1" }, "mylive/bunny": { "value": "mylive/bunny" }, "decklink": { "value": "Decklink-Stream" }, "dektec": { "value": "Dektec-Stream" }, "test_stream": { "value": "test_stream" } }, "x-api-allow": [ "watcher-client", "watcher-core", "watcher-admin", "vision-config-external", "smartcam", "central-layouter", "vision" ], "x-format-description": "media_name" }, "comment": { "description": "Human-readable description of the stream.\n", "type": "string", "example": "This is a test stream", "x-api-allow": [ "watcher-client", "watcher-core", "watcher-admin" ] }, "title": { "description": "Human-readable title of the stream. Provided for SDT MPEG-TS table or\nSDP RTSP title parameter.\n", "type": "string", "example": "Hockey channel", "x-api-allow": [ "watcher-client", "watcher-core", "watcher-admin" ] }, "recheck_secondary_inputs_interval": { "description": "How often to re-check secondary inputs. If this option is not set than check is never performed.", "type": "integer", "format": "seconds", "example": 120, "x-format-description": "seconds" } }, "required": [ "name" ] }, "stream_config_base": { "type": "object", "properties": { "static": { "default": true, "description": "Whether a stream is `static` or not. 
\nIf set to `True`, the server will try to keep this stream running even if\nthere are no viewers or errors are encountered.\n\nThe streamer restarts *all* `static` streams even if any internal errors occur\nand the `static` streams crash.\n", "type": "boolean", "example": true, "x-api-allow": [ "watcher-core", "watcher-client", "watcher-admin" ] } } }, "stream_config_input": { "type": "object", "properties": { "inputs": { "description": "List of stream inputs. \n***Important:*** A stream without any inputs can receive video frames **only** if a backup file is specified.\n", "items": { "allOf": [ { "$ref": "#/components/schemas/stream_input" } ] }, "type": "array", "x-api-allow": [ "smartcam", "watcher-core", "vision-config-external", "watcher-client", "watcher-admin", "central-layouter" ] }, "password": { "description": "Specify a password when publishing a password-protected stream.\n\nThe password is passed unencrypted in a query string. \nSome protocols may additionally offer built-in tools for stream protection, \nfor example you can use `passphrase` for SRT publications.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/publish-video-on-media-server/#live-publish-on_publish" }, "type": "string" }, "input_media_info": { "description": "Use this option for fine-grained control over each input track.\n\nYou can select tracks and change the name and title of each video and audio track.\n", "allOf": [ { "$ref": "#/components/schemas/input_media_info" } ] }, "provider": { "description": "Human-readable name of the content provider. 
Used, for example, for MPEG-TS.\n\nDeprecated, use `input_media_info.provider` instead.\n", "type": "string", "example": "SportsTV", "deprecated": true, "x-delete-at": 25.03 }, "dvbocr": { "description": "This parameter allows managing subtitles in an output stream.\n", "oneOf": [ { "const": "replace", "description": "An output stream will have a track containing subtitles converted to a text format (WebVTT)." }, { "const": "add", "description": "An output stream will have two tracks containing subtitles: \nthe original track with subtitles in DVB and a new track with text subtitles.\n" } ], "example": "replace" }, "source_timeout": { "description": "If a connected source does not send any data within this timeout period (in seconds), \nthe source connection is considered to be lost.\nThis is the default configuration for a stream; it can be modified for any input.\n", "anyOf": [ { "type": "integer", "format": "seconds", "x-format-description": "seconds" }, { "enum": [ false ], "type": "boolean" } ], "example": 10, "x-api-allow": [ "watcher-core" ] }, "video_timeout": { "description": "If a connected source does not send video data within this timeout period (in seconds), \nthe source connection is considered to be lost.\nThis is the default configuration for a stream; it can be modified for any input.\n", "type": "integer", "format": "seconds", "example": 20, "x-api-allow": [ "watcher-core" ], "x-format-description": "seconds" }, "audio_timeout": { "description": "If a connected source does not send audio data within this timeout period (in seconds), \nthe source connection is considered to be lost.\nThis is the default configuration for a stream; it can be modified for any input.\n", "type": "integer", "format": "seconds", "example": 20, "x-api-allow": [ "watcher-core" ], "x-format-description": "seconds" }, "max_retry_timeout": { "description": "The maximum time that Media Server will set for attempts to reconnect to sources when source problems occur.\nThe time between 
attempts is not linear and may increase if source problems are not fixed. This parameter limits that value, although the actual time between attempts may be longer.\n", "type": "integer", "format": "seconds", "example": 30, "minimum": 1, "x-api-allow": [ "watcher-core" ], "x-format-description": "seconds" }, "silencedetect": { "x-private": true, "description": "Configuration of silence detection for the stream.", "allOf": [ { "$ref": "#/components/schemas/silencedetect_spec" } ] }, "motion_detector": { "x-private": true, "description": "Configuration of motion detection for a stream.", "allOf": [ { "$ref": "#/components/schemas/motion_detector_spec" } ], "x-api-allow": [ "watcher-core" ] }, "backup": { "description": "When all inputs are down, this can be used to show at least something to users.\nIt is important to understand that backup video behaves differently from inputs. \nIt is not the _last input_ in the list. After any input stops sending frames, a timer starts.\nAfter `source_timeout` seconds the backup starts working, while all other inputs are still trying to\nconnect and start working.\nSo the backup and all inputs work in parallel.\n", "allOf": [ { "$ref": "#/components/schemas/backup_config" } ], "x-api-allow": [ "watcher-core" ] }, "epg_enabled": { "description": "Whether to extract EPG from the input.", "type": "boolean", "example": true }, "nomedia": { "x-private": true, "description": "The stream is not expected to have video or audio tracks.\nWe need this flag to receive MPEG-TS consisting of EIT PSIs only and pass it to the multiplexer.\n", "default": false, "type": "boolean" }, "mpegts_ac3": { "description": "It allows specifying AC-3 packing information for outgoing MPEG-TS streams. 
The default value is `system_b`.", "allOf": [ { "$ref": "#/components/schemas/output_mpegts_ac3" } ] } } }, "stream_config_media": { "type": "object", "properties": { "clients_timeout": { "description": "Stream's lifetime after the last client was disconnected (can be expressed in *seconds* or set to `False`). \nApplicable to on-demand streams **only**. \n", "anyOf": [ { "type": "integer" }, { "type": "boolean" } ], "example": 485, "x-api-allow": [ "watcher-core" ] }, "retry_limit": { "description": "Number of attempts for the server to reconnect to a data source.\nApplicable to on-demand streams **only**. If not defined, the server will constantly try to reconnect (unlimited number of retries). \nIf the input does not become active after the specified number of attempts, the stream shuts down until the next user request.\n", "type": "integer", "x-api-allow": [ "watcher-core" ] }, "transcoder": { "description": "Configuration of the transcoder settings.", "allOf": [ { "$ref": "#/components/schemas/transcoder_opts" } ], "examples": { "mylive/bunny": { "value": {} } }, "x-api-allow": [ "smartcam", "central-layouter" ] }, "logo": { "x-private": true, "x-notice": "not documented yet", "description": "Overlay logo.", "allOf": [ { "$ref": "#/components/schemas/web_logo_spec" } ] }, "thumbnails": { "description": "Configuration of thumbnails generator.", "allOf": [ { "$ref": "#/components/schemas/thumbnails_spec" } ], "x-api-allow": [ "watcher-core" ] }, "jpeg_snapshot_sign_key": { "description": "A key to sign jpeg_snapshot requests.", "type": "string", "x-api-allow": [ "watcher-core" ] }, "dvr": { "description": "DVR configuration.", "allOf": [ { "$ref": "#/components/schemas/stream_dvr_spec" } ], "x-api-allow": [ "watcher-core", "watcher-admin", "watcher-client", "central-layouter" ] }, "on_play": { "description": "Configuration of authorization backend for play sessions.", "externalDocs": { "description": "Find more information about `on_play` and `on_publish` here.", "url": 
"https://flussonic.com/doc/authorize-clients/#auth-on_play-on_publish" }, "allOf": [ { "$ref": "#/components/schemas/auth_spec" } ], "x-api-allow": [ "watcher-core" ] }, "on_publish": { "description": "Configuration of authorization backend for publish sessions.", "allOf": [ { "$ref": "#/components/schemas/auth_spec" } ] }, "drm": { "description": "Configuraton of Digital Rights Management system (DRM).", "allOf": [ { "$ref": "#/components/schemas/drm_spec" } ] }, "protocols": { "description": "Configuration to allow/forbid playing the stream via various protocols. \n- If the `whitelist` option is set to 'true', the server allows a playback only for listed protocols;\n- If the `whitelist` option is set to 'false', the server forbids a playback for listed protocols;\n- Server allows a playback for all the protocols by default.\n", "allOf": [ { "$ref": "#/components/schemas/play_protocols_spec" } ] }, "prepush": { "description": "The time (in seconds) that Media Server reserves for preloading the data, i. e. *buffering*.\nPrepush is always defined through GoP, but this option provides you with a more flexible way\nto configure the buffer size, e. g. a 1-3 or 7-10 seconds time interval.\n\nThe bigger the buffer size, the better the user experience is for the users\nwith a bad internet connection. However, the latency also increases.\n\nIf set to `False` to remove the latency, the stream's start time \nincreases. 
To decrease it, reduce the GoP size and make the bitrate higher \nor the video quality lower.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/technology-glossary/#glossary-prepush" }, "anyOf": [ { "type": "boolean" }, { "type": "integer" } ], "example": false, "x-api-allow": [ "watcher-core" ] }, "cmaf_enabled": { "x-private": true, "x-notice": "this should be opt-out in `protocols`, not here", "description": "Whether CMAF is enabled for the HLS protocol.", "type": "boolean", "example": true, "deprecated": true, "x-deleted-at": 24.05 }, "segment_count": { "description": "Number of segments stored in memory for the segment-based protocols, such as HLS and DASH.\nAdded to the HLS live manifest. Do not forget that one more segment is stored for stale clients\nthat come too late, but the latest segment is not shown in the manifest.\n", "type": "integer", "example": 4 }, "segment_duration": { "description": "The segment duration. Used for segment-based protocols like HLS or DASH. 
\nThe disk config offers this value in seconds.\n", "allOf": [ { "$ref": "#/components/schemas/segment_duration" } ], "example": 5000 }, "chunk_duration": { "description": "Chunk duration in the LL-HLS manifest, used for tuning latency.", "type": "integer", "format": "milliseconds", "example": 200, "x-format-description": "milliseconds" }, "dash_update_period": { "description": "The option allows overriding the \"minimumUpdatePeriod\" attribute in the DASH manifest.\nIn fact, the option controls how often a client will request an updated manifest.\nPlease note that it may break playback; use it at your own risk.\n", "type": "integer", "format": "milliseconds", "example": 270000000, "x-private": true, "x-format-description": "milliseconds" }, "url_prefix": { "description": "A string prepended to the addresses of separate segments within *segment-based* playlists (HLS or DASH).\nEach sub-playlist is stored on Media Server.\n\nIf set to `false`, the configured value in a template will be disabled. 
\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/configure-similar-streams-with-templates/#global-options-of-streams" }, "allOf": [ { "$ref": "#/components/schemas/url_prefix" } ] }, "hls_scte35": { "description": "Whether to enable SCTE-35 ad insertion markers signaling in HLS manifest.\nAd markers can be included in SCTE-35 (`scte35`), AWS (`aws`), EXT-X-DATERANGE (`rfc8216`) formats or not included (`false`).\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/advertisement-scte-markers/", "type": "string" }, "oneOf": [ { "title": "aws", "const": "aws", "description": "AWS format" }, { "title": "scte35", "const": "scte35", "description": "ANSI SCTE35 format" }, { "title": "rfc8216", "const": "rfc8216", "description": "Apple RFC 8216 EXT-X-DATERANGE format" } ], "example": "scte35" }, "add_audio_only": { "description": "Whether to add an audio-only version of an HLS stream. \nUsed to create App Store compliant HLS streams to deliver the content to Apple iOS devices. \n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/what-is-live-stream/#live-audio_only_hls" }, "type": "boolean", "example": true }, "substyle": { "x-private": true, "x-notice": "This option should be renamed to something more clear", "description": "Subtitles style configuration.", "allOf": [ { "$ref": "#/components/schemas/subtitle_style" } ] }, "webrtc_abr": { "description": "WebRTC play configuration for a stream.", "allOf": [ { "$ref": "#/components/schemas/webrtc_abr_opts" } ] }, "pushes": { "description": "A list of pushes. When a server initiates the connection and sends a stream \nto other server(s), it is called a `push`. 
\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/push-video-from-media-server/" }, "items": { "allOf": [ { "$ref": "#/components/schemas/stream_push" } ] }, "type": "array", "x-api-allow": [ "watcher-core", "watcher-admin" ] }, "mpegts_pids": { "description": "This parameter sets PIDs values for outgoing MPEG-TS streams. PID contains information about the TS package content and can be decoded according to special service tables. \nIt is possible to set PID values for PMT, SDT, video, and audio tracks. Tracks are numbered starting from one. \nThe code `a1=123` sets a PID value for the first audio track. It is possible to set the base index for the tracks of a certain type using the 0 (zero) index. \nFor example, `t0=100` sets PID=101 for the first track, 102 for the second, and so on. Numbers can be given in decimal form (by default) or hexadecimal with 16# prefix. \n", "allOf": [ { "$ref": "#/components/schemas/output_mpegts_pids" } ] }, "labels": { "type": "object", "additionalProperties": { "type": "string", "maxLength": 40, "minLength": 1 }, "maxItems": 10, "x-key-type": "string", "description": "Stream labels in key value format.", "x-api-description": { "central": "Stream labels in key value format.\n\nSee [Layouter schema](https://flussonic.com/doc/api/layouter/#tag/stream/operation/streams_list/response%7Cstreams%7Clabels)\nfor more details.\n", "central-layouter": "Stream labels in key value format.\nYou can use labels to control the desired stream layout.\n\nNote, that if you are using a multi-stream agent (i.e. multiple streams use the same agent in their inputs),\nthen the labels of these streams **must** be the same. Otherwise, one or more streams will not work.\n\nLayouter can process next labels:\n\n- With `required_` prefix. 
If a stream has the `required_x=y` label, the layouter will provision the stream only to nodes with the label `x=y`.\nIf there are no available nodes with the label `x=y`, then the stream will not be provisioned. \n\n**Use cases**\n\nSuppose you have a server used to test new hardware models, and you want the new streams to be provisioned only to the test nodes.\nTo provide this layout, you can add the `required_env=test` label for the stream and the `env=test` label for the test nodes.\n" }, "examples": [ { "key1": "value1", "key2": "value2" }, { "required_env": "test", "location": "eu" } ], "x-api-allow": [ "watcher-core", "central-layouter", "watcher-admin" ] }, "playback_headers": { "description": "This parameter sets playback HTTP headers for streams.\n", "items": { "$ref": "#/components/schemas/playback_headers" }, "maxItems": 10, "type": "array" } } }, "stream_config_onpremises": { "type": "object", "properties": { "debug_stream": { "x-private": true, "description": "Configuration of recording the stream session data. 
Recommended for debugging needs **only**.", "allOf": [ { "$ref": "#/components/schemas/debug_stream_spec" } ] }, "cache": { "description": "Configuration of DVR cache.", "allOf": [ { "$ref": "#/components/schemas/cache_spec" } ] }, "vision": { "allOf": [ { "$ref": "#/components/schemas/vision_spec" } ], "description": "Video analytics parameters.", "x-api-allow": [ "vision-config-external", "smartcam", "vision", "watcher-core", "watcher-admin", "watcher-client", "central-layouter" ] } } }, "stream_config_single_media": { "type": "object", "properties": { "srt_publish": { "description": "SRT publishing configuration for a stream.", "allOf": [ { "$ref": "#/components/schemas/srt_config" } ] }, "srt2_publish": { "description": "SRT2 publishing configuration for a stream.", "allOf": [ { "$ref": "#/components/schemas/srt_config" } ] } } }, "stream_config_deprecated": { "type": "object", "properties": {} }, "stream_config_additional": { "type": "object", "properties": {} }, "push_counters": { "type": "object", "properties": { "url": { "description": "Obfuscated URL to push to.", "type": "string", "format": "input_url", "x-format-description": "input_url" }, "opened_at": { "type": "integer", "format": "utc_ms", "description": "The time in milliseconds when the pusher instance was created.", "x-format-description": "Unix timestamp in milliseconds", "minimum": 1000000000000, "maximum": 10000000000000 }, "status": { "description": "State of the push session.", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/sessions-in-media-server/#events-and-session-states" }, "allOf": [ { "$ref": "#/components/schemas/pusher_status" } ] }, "standby_status": { "description": "State of the standby push.", "allOf": [ { "$ref": "#/components/schemas/pusher_standby_status" } ] }, "bytes": { "type": "integer", "format": "bytes", "description": "Total amount of bytes sent since the pusher was created.", "x-format-description": "bytes" }, 
"frames": { "type": "integer", "description": "Number of frames sent by this pusher.\n" }, "segments": { "type": "integer", "description": "Number of segments sent by this pusher.\n" }, "pusher_restarts": { "description": "How many times pusher was restarted", "type": "integer" }, "errors_stop_overloaded": { "description": "How many times pusher was stopped due to overload", "type": "integer" }, "errors_dropped_frames": { "description": "Number of dropped frames", "type": "integer" }, "errors_dropped_segments": { "description": "Number of dropped segments", "type": "integer" }, "pids": { "type": "array", "items": { "$ref": "#/components/schemas/push_pid_counters" }, "description": "Per pid statistics for MPEG-TS encoding calculated for the pusher\n" }, "sys_fillers_bytes": { "description": "The fillers bytes count for system traffic.", "type": "integer", "format": "bytes", "x-format-description": "bytes" }, "sys_payload_bytes": { "description": "The payload bytes count for system traffic.", "type": "integer", "format": "bytes", "x-format-description": "bytes" }, "sys_stuffing_packets": { "description": "The stuff packets count for system traffic.", "type": "integer" }, "encoded_bytes": { "description": "The encoded bytes count.", "type": "integer", "format": "bytes", "x-format-description": "bytes" }, "resent_packets": { "description": "Number of retries since the last successful push.", "type": "integer" }, "errors_device_not_opened": { "description": "How much times pusher was unable to open (attach to) device for pushing", "type": "integer" }, "errors_device_buffer_overflow": { "description": "How many times internal device buffer was overflowed.", "type": "integer" }, "errors_audio_frame_decode": { "description": "Number of errors during of audio frame decoding to raw format.", "type": "integer" }, "errors_video_frame_decode": { "description": "Number of errors during of video frame decoding to raw format.", "type": "integer" }, "errors_no_destination": { 
"description": "How many times pusher did not establish connection because of destination peer is not reached", "type": "integer" }, "errors_tls": { "description": "How many times pusher got TLS errors.", "type": "integer" }, "errors_connection_lost": { "description": "How many times pusher unexpectedly lost connection with peer", "type": "integer" }, "errors_401": { "type": "integer", "description": "How many times we've got 401 (unauthorized).\n" }, "errors_403": { "type": "integer", "description": "How many times we've got 403 (forbidden).\n" }, "errors_404": { "type": "integer", "description": "How many times we've got 404 (enoent).\n" }, "errors_409": { "type": "integer", "description": "How many times we've got 409 (double_publish_denied).\n" }, "errors_500": { "type": "integer", "description": "How many times we've got 500 (server_error).\n" }, "errors_redirect_limit": { "type": "integer", "description": "How many times pusher was stopped because of redirect limit is reached\n" }, "errors_not_authorized": { "description": "How many times pusher did not establish connection because of missing or wrong credentials", "type": "integer" }, "genlock_status": { "description": "SDI card output clock-lock state.", "allOf": [ { "$ref": "#/components/schemas/genlock_status" } ] }, "genref_status": { "description": "SDI card reference port (Ref In Port) status.", "allOf": [ { "$ref": "#/components/schemas/genref_status" } ] } } }, "push_pid_counters": { "type": "object", "required": [ "pid" ], "properties": { "pid": { "description": "Related MPEG-TS pid", "type": "integer" }, "pnr": { "description": "What program does have this pid", "type": "integer" }, "packets": { "description": "How many MPEG-TS packets with 188 bytes on this pid produced\n", "type": "integer" }, "payload_bytes": { "description": "The payload bytes count.", "type": "integer", "format": "bytes", "x-format-description": "bytes" }, "fillers_bytes": { "type": "integer", "format": "bytes", "description": 
"How many bytes were seen in NAL fillers\n", "x-format-description": "bytes" }, "stuffing_packets": { "description": "The stuff packets count.", "type": "integer" }, "trimmed_bytes": { "description": "The trimmed bytes count.", "type": "integer", "format": "bytes", "x-format-description": "bytes" }, "trimmed_frames": { "description": "The trimmed frames count.", "type": "integer" } } }, "silencedetect_spec": { "type": "object", "required": [ "noise" ], "properties": { "duration": { "description": "The duration, in seconds, of a continuous time interval during which silence must last for Flussonic to generate the `audio_silence_detected` event.", "type": "integer", "format": "seconds", "example": 20, "x-format-description": "seconds" }, "interval": { "description": "Flussonic will keep sending the `audio_silence_detected` event once upon the specified time interval until the sound reappears in the source.", "type": "integer", "example": 10 }, "noise": { "description": "The threshold value of the sound level, in dB. 
\nSound of this and lower level will be considered by Flussonic as silence.\n", "type": "number", "format": "decibels", "example": -30, "x-format-description": "decibels" } } }, "vbi_lines": { "type": "object", "properties": { "service": { "description": "The service information passed to VBI of the output analog stream.\nThe allowed value is `ttxt` - teletext.\n", "allOf": [ { "$ref": "#/components/schemas/vbi_service" } ] }, "lines": { "description": "Numbers of VBI lines that will carry a teletext track.", "items": { "allOf": [ { "$ref": "#/components/schemas/vbi_line" } ] }, "type": "array" } } }, "scale_algorithm": { "enum": [ "fast_bilinear", "bilinear", "bicubic", "experimental", "neighbor", "area", "bicublin", "gauss", "sinc", "lanczos", "spline" ], "type": "string" }, "web_logo_spec": { "type": "object", "properties": { "height": { "description": "Set the specified height for the logo.", "type": "integer", "example": 100 }, "width": { "description": "Set the specified width for the logo.", "type": "integer", "example": 200 }, "left": { "description": "Change the position of the logo to the left.", "type": "integer", "example": 15 }, "top": { "description": "Change the position of the logo to the top.", "type": "integer", "example": 15 }, "right": { "description": "Change the position of the logo to the right.", "type": "integer" }, "bottom": { "description": "Change the position of the logo to the bottom.", "type": "integer" } } }, "network_port": { "maximum": 65535, "minimum": 0, "type": "integer" }, "listen_spec": { "anyOf": [ { "allOf": [ { "$ref": "#/components/schemas/network_port" } ] }, { "type": "string", "format": "hostport", "x-format-description": "Hostname with port" } ] }, "session_key_query": { "format": "query_session_key", "type": "string", "x-format-description": "query_session_key" }, "input_stats": { "allOf": [ { "type": "object", "properties": { "ip": { "type": "string", "description": "IP address of the connected peer.", "example": 
"172.16.25.73" }, "proto": { "allOf": [ { "$ref": "#/components/schemas/protocol" } ], "description": "Protocol used for the data transmission in the session.", "example": "dash" }, "opened_at": { "type": "number", "format": "utc_ms", "description": "The time when this session was created.", "example": 1637094994000, "x-format-description": "Unix timestamp in milliseconds", "minimum": 1000000000000, "maximum": 10000000000000 }, "media_info": { "allOf": [ { "$ref": "#/components/schemas/media_info" } ], "description": "Technical description of the input content.\n" }, "ts_delay": { "type": "number", "format": "ticks", "example": 1284, "description": "The time period during which no frames were received from the stream's input.\n", "x-format-description": "ticks" }, "ts_delay_per_tracks": { "type": "array", "items": { "type": "number", "format": "ticks", "x-format-description": "ticks" }, "example": [ 1284 ], "description": "The time period during which no frames were received per each track according to `media_info`\n" }, "url": { "type": "string", "format": "url", "description": "Final URL after redirects.\n\nDeprecated because was never actually used.\n", "example": "udp://239.0.0.1:1234", "deprecated": true, "x-delete-at": 25.03, "x-format-description": "url" }, "user_agent": { "type": "string", "description": "Client's user agent for selected protocol.", "example": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML. 
like Gecko) Chrome/90.0.4430.72 Safari/537.36" }, "active": { "description": "Whether this input is selected as active for the stream.", "type": "boolean", "example": true }, "dvr_info": { "description": "Information about DVR that this input has\n", "allOf": [ { "$ref": "#/components/schemas/dvr_info" } ] } } }, { "$ref": "#/components/schemas/input_counters" } ] }, "input_counters": { "type": "object", "description": "Here are grouped different counters for sessions: generic and errors\n", "properties": { "bytes": { "type": "integer", "format": "bytes", "default": 0, "description": "Number of bytes received by this stream from outside.\nIt will be counted before transcoding and will sum all inputs working together.\n", "openmetrics_metric": "stream_input_bytes", "x-format-description": "bytes" }, "frames": { "type": "integer", "default": 0, "description": "Number of frames passed to this stream from the inputs.\n" }, "retries": { "type": "integer", "description": "How many times has this stream retried to connect to source" }, "media_info_changes": { "type": "integer", "description": "Indicates, how often does media_info changes\n" }, "valid_secondary_inputs": { "description": "Number of secondary inputs that have no problems.", "type": "integer", "example": 2 }, "invalid_secondary_inputs": { "description": "Number of secondary inputs that have some problems.", "type": "integer", "example": 0 }, "resync_count_normal": { "type": "integer", "description": "Stream timestamps are synchronized with real time. This counter tells, how many times\nit was syncronized after source reconnect.\n" }, "resync_count_jump": { "type": "integer", "description": "Source may change timestamps without any signalling. This counter indicates how bad is the source\n" }, "resync_count_drift": { "type": "integer", "description": "Source may send frames timestamps faster or slower than realtime.\nLive stream will catch it and resync. 
This counter indicates how many times did it happened.\n" }, "reorder_count": { "type": "integer", "description": "Source may send frames not in the order they should be played. It will be catched and reordered.\nThis counter indicates how many times did it happened.\n" }, "ad_splices_ingested": { "type": "integer", "description": "How many Ad markers passed to this stream from the inputs.\n" }, "ad_splices_inserted": { "type": "integer", "description": "How many Ad markers inserted to this stream by user.\n" }, "srt": { "type": "object", "$ref": "#/components/schemas/input_srt_counters" }, "motion_detector": { "type": "object", "$ref": "#/components/schemas/input_motion_detector_counters" }, "errors": { "type": "integer", "description": "Sum of all other specific errors. Can be used for triggering alert on any error\n", "example": 0 }, "errors_lost_packets": { "type": "integer", "description": "RTP, MPEG-TS or other protocols have enough information to tell how many packets were lost\n" }, "errors_decoder_reset": { "type": "integer", "description": "Decoder reset count due to abnormal DTS change. 
Can happen in MPEG-TS, RTP.\n" }, "errors_broken_payload": { "type": "integer", "description": "Demultiplexing was done right, but content is broken.\n" }, "errors_dropped_frames": { "type": "integer", "description": "Dropped frames count due timestamp adjustment.\n" }, "errors_desync": { "type": "integer", "description": "This can be used as a `TS_sync_loss` - how many times MPEG-TS sync was lost.\n\nAlso this counter refers to RTSP desync, when camera starts dropping TCP data and\nwe have to find packet boundaries.\n\nHere we write count of such resynchronizations.\n" }, "errors_ts_pat": { "type": "integer", "description": "how many times PAT was missing during 0,5 seconds or pid 0 misses PAT\n\n`PAT_error`\n" }, "pids": { "type": "array", "items": { "$ref": "#/components/schemas/input_pid_counters" }, "description": "Per pid statistics calculated for MPEG-TS input\n" }, "rtp_channels": { "type": "array", "items": { "$ref": "#/components/schemas/input_rtp_counters" }, "description": "Per channel statistics calculated for RTP input\n" }, "errors_ts_service_lost": { "type": "integer", "description": "How many times have received PAT that was missing required service (program)\n" }, "errors_ts_stuck_restarts": { "type": "integer", "description": "Number of connection restarts to fix ts_stuck issue. 
Can happen in RTSP.\n" }, "errors_404": { "type": "integer", "description": "How many times we've got 404 (enoent).\n" }, "errors_403": { "type": "integer", "description": "How many times we've got 403 (eaccess).\n" }, "errors_500": { "type": "integer", "description": "How many times we've got 500 (backend error).\n" }, "errors_crashed": { "type": "integer", "description": "How many times input was restarted due to internal crash.\n\nThis may happen due to unhandled input.\n" }, "sdi": { "type": "object", "$ref": "#/components/schemas/input_sdi_counters" }, "agent": { "type": "object", "$ref": "#/components/schemas/input_agent_counters" } } }, "input_pid_counters": { "type": "object", "required": [ "pid" ], "properties": { "pid": { "description": "Related MPEG-TS pid with following problems\n", "type": "integer" }, "pnr": { "description": "What program does have this pid\n", "type": "integer" }, "packets": { "description": "How many MPEG-TS packets with 188 bytes on this pid received\n", "type": "integer" }, "frames": { "description": "Frame count on this pid\n", "type": "integer" }, "empty_packets": { "description": "Packets without payload and adaptation field\n", "type": "integer" }, "errors_adaptation_broken": { "description": "Packets with adaptation field larger than packet size\n", "type": "integer" }, "errors_ts_scrambled": { "type": "integer", "description": "Amount of scrambled TS packets\n" }, "errors_ts_pmt": { "type": "integer", "description": "how many times PMT was not received after 0,5 seconds\n\n`PMT_error`\n" }, "errors_ts_cc": { "type": "integer", "description": "how many MPEG-TS packets were received with non-contigious contiuity counters.\n\n`Continuity_count_error`\n", "example": 0 }, "errors_ts_tei": { "type": "integer", "description": "How many MPEG-TS packets with Transport Error Indicator were received\n\n`Transport_error`, 2.1\n" }, "errors_ts_psi_checksum": { "type": "integer", "description": "How many times have received PSI entry with 
broken checksum\n\n`CRC_error`\n" }, "errors_pid_lost": { "type": "integer", "description": "How many times pid has been lost\n" }, "broken_pes_count": { "type": "integer", "description": "How many PES packets were started not from startcode\n" }, "broken_pes_sum": { "type": "integer", "description": "How many bytes were discarded due to lack of PES startcode\n" }, "time_corrections": { "type": "integer", "description": "Jumps of timestamps inside a MPEG-TS stream\n" }, "repeated_frames": { "type": "integer", "description": "In case of CC error last frame can be repeated. This is a count of repeated frames\n" }, "corrected_backward_pts": { "type": "integer", "description": "How many times PTS was less than PCR or previous PTS\n" }, "pcr_resync": { "type": "integer", "description": "If PTS is drifting away from PCR, it can be resynchronized with PCR. This is a resync count\n" }, "discarded_buffer_count": { "type": "integer", "description": "How many times was discarded too big ES buffer without making a frame of it\n" }, "discarded_buffer_sum": { "type": "integer", "description": "How many bytes were lost due to discarding ES buffer\n" }, "fillers_count": { "type": "integer", "description": "How many H264(5) NAL fillers were seen in the input\n" }, "fillers_sum": { "type": "integer", "description": "How many bytes were seen in NAL fillers\n" }, "padding_pes_count": { "type": "integer", "description": "How many PES packets were on the Padding streamId\n" }, "padding_pes_sum": { "type": "integer", "description": "How many bytes were in PES packets on the Padding streamId\n" }, "crashed": { "type": "integer", "description": "Unhandled crashes inside mpegts decoding process due\n" }, "dts_goes_backwards": { "type": "integer", "description": "Time on this PID jumped back from reference PTS and it was not a roll over zero\n" }, "dts_jump_forward": { "type": "integer", "description": "Time on this PID jumped forward too far away from reference PTS\n" }, 
"too_large_dts_jump": { "type": "integer", "description": "Jump of the PTS was so big from previous, that had to flush all frames and restart parsing\n" } } }, "input_rtp_counters": { "allOf": [ { "$ref": "#/components/schemas/rtp_counters_base" }, { "$ref": "#/components/schemas/h26x_decoder_counters" } ] }, "input_srt_counters": { "type": "object", "description": "SRT specific counters\n", "properties": { "rtt": { "type": "integer", "description": "Round-trip time\n" }, "latency": { "type": "integer", "description": "Receiver buffering delay" }, "packets": { "type": "integer", "description": "Total incoming SRT packets counter\n" }, "retransmitted_packets": { "type": "integer", "description": "How many packets were retransmitted\n" }, "error_lost_packets": { "type": "integer", "description": "How many SRT packets were lost\n" }, "error_dropped_packets": { "type": "integer", "description": "How many SRT packets were dropped by various reasons\n" } } }, "input_motion_detector_counters": { "type": "object", "description": "Specific counters to get insights on current state of getting events from cameras.\nDesigned to be used by analyzers, monitoring and alerting tools\n", "properties": { "motion_detected_count": { "type": "integer", "description": "Number of detected motions.\n" }, "episodes_count": { "type": "integer", "description": "Number of collected episodes.\n" }, "errors_not_authorized_count": { "type": "integer", "description": "Number of not authorized requests\n" }, "errors_url_unreachable_count": { "type": "integer", "description": "Number of failed requests because of bad url or network issues.\n" }, "errors_broken_payload": { "type": "integer", "description": "Number of responses with broken content.\n" }, "errors_no_agent_connected": { "type": "integer", "description": "Number of failed request attempts because of no agent connected.\n" }, "errors_no_service_count": { "type": "integer", "description": "Number of attempts to request disabled or 
unsupported ONVIF service \n" }, "errors_incorrect_time_values_count": { "type": "integer", "description": "`ONVIF Event Handling Test Specification` says that valid values for `CurrentTime` and `TerminationTime` are \n`TerminationTime >= CurrentTime + InitialTerminationTime`.\n\nHow many responses did not met the condition.\n" } } }, "input_sdi_counters": { "type": "object", "description": "SDI,HDMI and other raw input counters", "properties": { "errors_no_signal": { "type": "integer", "description": "Frames dropped due to 'No signal'." }, "errors_duplicate": { "type": "integer", "description": "Frame data is duplicated from previous frame because the input was too slow." }, "errors_ts_duplicate": { "type": "integer", "description": "Frame time is the same as the previous frame." }, "errors_cpu_stall": { "type": "integer", "description": "The frame was dropped due to too high CPU load." }, "peak_duration_deviation": { "type": "integer", "description": "Gauge of maximum deviation from the estimated frame duration." }, "avg_recv_duration": { "type": "integer", "description": "Gauge of average duration of incoming frame calculated in real time." }, "error_lost_audio": { "type": "integer", "description": "Counter of configured audio sdi channels without samples or non valid." } } }, "input_agent_counters": { "type": "object", "description": "Agent counters", "properties": { "errors_conn_failed": { "type": "integer", "description": "The agent was unable to open the requested connection. These errors may indicate problems opening the TCP socket on the agent or the remote host is unreacheable." }, "errors_out_of_memory": { "type": "integer", "description": "These errors indicate that the agent does not have enough memory to establish a connection to the remote host." }, "errors_buffer_overrun": { "type": "integer", "description": "These errors indicate that the agent does not have enough buffer size to handle outgoing traffic." 
}, "errors_invalid_request": { "type": "integer", "description": "These errors indicate that the agent is receiving invalid requests." }, "errors_unknown": { "type": "integer", "description": "Unknown errors counter." } } }, "rtp_counters_base": { "type": "object", "required": [ "channel_id" ], "properties": { "channel_id": { "description": "RTP channel number\n", "type": "integer", "example": 0 }, "content": { "description": "Content of the track transmitted in the channel\n", "type": "string", "example": "video" }, "rtp_packets": { "type": "integer", "description": "How many RTP packets received for this channel\n" }, "rtcp_packets": { "type": "integer", "description": "How many RTCP packets received for this channel\n" }, "bytes": { "type": "integer", "description": "How many bytes received for this channel\n" }, "frames": { "type": "integer", "description": "How many frames received for this channel\n" }, "pt_reject_count": { "type": "integer", "description": "Number of rtp packets rejected due to wrong payload type\n" }, "pt_reject_sum": { "type": "integer", "description": "Total size of rejected packets due to wrong payload type (pt_reject_count) rtp packets\n" }, "ts_goes_backwards": { "type": "integer", "description": "Time on this channel is jumped back from reference wallclock.\n" }, "ts_jump_forward": { "type": "integer", "description": "Time on this channel is jumped forward from reference wallclock.\n" }, "ts_stuck": { "type": "integer", "description": "https://datatracker.ietf.org/doc/html/rfc6184#section-4.1\n\naccess unit: A set of NAL units always containing a primary coded picture. In addition to the primary coded\npicture, an access unit may also contain one or more redundant coded pictures or other NAL units not containing\nslices or slice data partitions of a coded picture. 
The decoding of an access unit always results in a\ndecoded picture.\n\nThere is `marker bit` in RTP packet which is set for the very last packet of the access unit indicated by the RTP timestamp.\n\nIt is protocol violation if received RTP packet has the same timestamp as previous marker bit packet.\n\nThis counter is a number of RTP packets which `RTP timestamp` is equal to previous RTP marker bit packet.\n" }, "errors_dts_stuck": { "type": "integer", "description": "Number of frames which dts is same as previous frame dts.\n" }, "sr_ts_stuck": { "type": "integer", "description": "Number of rtcp SR packets which `RTP timestamp` is equal to the previous rtcp SR packet `RTP timestamp`. \n" }, "sender_clock_deviation": { "type": "integer", "description": "Sender wallclock deviation from server time in ms. Positive value means that sender time is ahead of server time.\n" }, "marker_packets_count": { "type": "integer", "description": "Number of RTP packets which marker bit is set to one.\n" }, "no_marker_mode_flag": { "type": "boolean", "description": "If no marker bit packet is received after 400 RTP packets then decoder switches to `no_marker_mode` and\nmakes frame on each timecode change. 
\n\nThis flag shows if decoder works in `no_marker_mode`.\n" }, "errors_broken_payload": { "type": "integer", "description": "Demultiplexing was done right, but content is broken.\n" }, "errors_lost_packets": { "type": "integer", "description": "RTP have enough information to tell how many packets were lost\n" } } }, "h26x_decoder_counters": { "type": "object", "description": "Here are counters for h264/h265 decoder.\n", "properties": { "nal_count": { "type": "integer", "description": "How many NAL units handled by this decoder.\n" }, "discarded_broken_nal_count": { "type": "integer", "description": "Number of NAL units, which `forbidden_zero_bit` is set to one.\n" }, "discarded_not_allowed_nal_count": { "type": "integer", "description": "Number of NAL units, which type is not allowed in `non-interleaved packetization mode`.\n" }, "nal_fu_count": { "type": "integer", "description": "`Fragmentation Unit` used to fragment a single NAL unit over multiple RTP packets.\n`H.264` uses `FU-A` NAL. `H.265` has its own fragmentation unit.\n\nThis counter shows how many `Fragmentation Units` handled by this decoder. \n" }, "nal_stap_a_count": { "type": "integer", "description": "How many NAL `STAP_A` units handled by this decoder.\n" }, "nal_aggregation_count": { "type": "integer", "description": "How many NAL `AGGREGATION` units handled by this decoder.\n" }, "fu_pattern_is_broken_count": { "type": "integer", "description": "`Fragmentation Unit` used to fragment a single NAL unit over multiple RTP packets.\n`H.264` uses `FU-A` NAL. `H.265` has its own fragmentation unit.\n\n`Fragmentation Units` pattern must have a `Start FU`, `End FU` and could have `FUs` between these ones. \n\nThis counter indicates how many times pattern was broken.\n" }, "fu_has_both_start_end_bits_count": { "type": "integer", "description": "`Fragmentation Unit` used to fragment a single NAL unit over multiple RTP packets.\n`H.264` uses `FU-A` NAL. 
`H.265` has its own fragmentation unit.\n\nThis counter shows number of `Fragmentation Units` which `Start bit` and `End bit` are set to one in the same `FU` header\n" }, "incomplete_nal_count": { "type": "integer", "description": "NAL deframentation could be interrupted by unexpected NAL or broken/incomplete packet.\nIf NAL deframentation is interrupted then incomplete fragment of NAL is not discarded and used in decoding process.\n\nThis counter indicates how many incomplete NALs were used. \n" }, "discarded_fu_count": { "type": "integer", "description": "`Fragmentation Unit` used to fragment a single NAL unit over multiple RTP packets.\n`H.264` uses `FU-A` NAL. `H.265` has its own fragmentation unit.\n\nThis counter shows number of discarded `Fragmentation Units`\n" }, "fu_end_then_middle_workaround_count": { "type": "integer", "description": "There is workaround to not interrupt `FU` sequence if `end-FU` followed by `middle-FU`. \nThis counter shows how many time the workaround was applied.\n" }, "nal_sei_count": { "type": "integer", "description": "How many NAL `SEI` units handled by this decoder.\n" }, "invalid_sei_type_count": { "type": "integer", "description": "Number of `SEI` NAL units with invalid type\n" }, "invalid_sei_size_count": { "type": "integer", "description": "Number of `SEI` NAL units with invalid size\n" }, "invalid_sei_payload_count": { "type": "integer", "description": "Number of `SEI` NAL units with bad payload\n" }, "discarded_sei_count": { "type": "integer", "description": "Number of discarded `SEI` NAL units\n" }, "nal_idr_count": { "type": "integer", "description": "How many NAL `IDR` units handled by this decoder.\n" }, "nal_single_count": { "type": "integer", "description": "How many NAL `SINGLE` units handled by this decoder. 
\n" }, "nal_sps_count": { "type": "integer", "description": "How many NAL `SPS` units handled by this decoder.\n" }, "nal_pps_count": { "type": "integer", "description": "How many NAL `PPS` units handled by this decoder.\n" }, "nal_aud_count": { "type": "integer", "description": "How many NAL `AUD` units handled by this decoder.\n" }, "nal_filler_count": { "type": "integer", "description": "How many NAL `FILLER` units handled by this decoder.\n" }, "nal_slice_count": { "type": "integer", "description": "How many NAL `SLICE` units handled by this decoder.\n" }, "nal_vps_count": { "type": "integer", "description": "How many NAL `VPS` units handled by this decoder.\n" }, "nal_other_count": { "type": "integer", "description": "How many other NAL units handled by this decoder.\n" }, "discarded_nal_count": { "type": "integer", "description": "Number of discarded NAL units.\n" } } }, "segment_duration": { "type": "integer", "format": "milliseconds", "minimum": 1000, "maximum": 15000, "x-format-description": "milliseconds" }, "stream_push": { "oneOf": [ { "$ref": "#/components/schemas/stream_push_rtmp", "x-api-allow": [ "watcher-admin" ] }, { "$ref": "#/components/schemas/stream_push_udp" }, { "$ref": "#/components/schemas/stream_push_m4f" }, { "$ref": "#/components/schemas/stream_push_m4s" }, { "$ref": "#/components/schemas/stream_push_decklink" }, { "$ref": "#/components/schemas/stream_push_dektec" }, { "$ref": "#/components/schemas/stream_push_dektec_asi" }, { "$ref": "#/components/schemas/stream_push_tshttp" }, { "$ref": "#/components/schemas/stream_push_hls" }, { "$ref": "#/components/schemas/stream_push_srt" }, { "$ref": "#/components/schemas/stream_push_st2110" } ], "x-pattern-discriminator": "url" }, "stream_push_base": { "type": "object", "properties": { "comment": { "description": "Human-readable description of the pusher.\n", "type": "string", "example": "This is a test push", "x-api-allow": [ "watcher-admin" ] }, "stats": { "description": "Detailed runtime 
information about the push.", "allOf": [ { "$ref": "#/components/schemas/push_counters" } ], "readOnly": true, "x-api-allow": [ "watcher-admin" ] }, "retry_limit": { "description": "The maximum number of times *Flussonic* retries to push the stream.", "type": "integer", "x-api-allow": [ "watcher-admin" ] }, "retry_timeout": { "description": "How often *Flussonic* should retry attempts to send the stream, e.g., if it has become offline. \nIt is an interval in seconds, 5 seconds by default. \nYou can increase this value to reduce server load.\n", "type": "integer", "format": "seconds", "example": 7, "x-api-allow": [ "watcher-admin" ], "x-format-description": "seconds" }, "timeout": { "description": "Time interval, in seconds, after which the pusher is stopped if the source stream or publishing is stopped.\n", "type": "integer", "format": "seconds", "example": 10, "x-api-allow": [ "watcher-admin" ], "x-format-description": "seconds" }, "connect_timeout": { "description": "Connection timeout, in seconds. Equals to 0 by default.", "type": "integer", "format": "seconds", "example": 2, "x-format-description": "seconds" }, "disabled": { "description": "Disable pushing the stream.\n\nTemporary disabling, or pausing, an offline stream eliminates the necessity to remove it from the the configuration in order to stop Flussonic trying to push it. \nIn this way, the URL and other settings of a disabled stream remain in Flussonic.\n", "type": "boolean", "x-api-allow": [ "watcher-admin" ] } } }, "stream_push_rtmp": { "allOf": [ { "type": "object", "title": "RTMP", "required": [ "url" ], "properties": { "url": { "description": "RTMP URL where to push.\nYou can publish to RTMP servers. 
Usually it is a social network streaming.\n", "type": "string", "x-api-allow": [ "watcher-admin" ], "format": "input_url", "examples": { "default": { "value": "rtmp://your-server.com/app/stream1" } }, "pattern": "^rtmps?://.*$", "x-format-description": "input_url" }, "service": { "description": "The name of the service.\nThe value will be sent within FlashVer string when establishing a connection.\nString template is `FMLE/3.0 (compatible; #{encoder}; Streamer #{streamer_version}; #{service}`.\nExample of the resulting FlashVer string is `FMLE/3.0 (compatible; Lavf56.40.101; Streamer 25.01; My service)`.\n", "type": "string", "example": "My service" }, "domain": { "description": "Service public domain name.\nThe value will be sent within notify message with command name 'onMetaData'\nMetadata also will contain the name `Streamer`, streamer version, the type and version of the operating system.\nMetadata will be sent as map with associated map key `yt_project`.\nString template is `Streamer #{streamer_version} #{encoder} #{os_type} #{os_version} #{domain}`.\nExample of the resulting string is `{\"yt_project\" : \"Streamer 25.01 Lavf56.40.101 unix-linux 6.1.0 officialdomain.com\"}`.\n", "type": "string", "example": "officialdomain.com" }, "encoder": { "description": "The name of the encoder used by the pusher. 
Can also be used as a device name.\nThe value will be sent within notify message with command name 'onMetaData' and within FlashVer string (see above).\n", "type": "string", "example": "Lavf57" } } }, { "$ref": "#/components/schemas/stream_push_base" } ] }, "stream_push_udp": { "allOf": [ { "type": "object", "title": "Multicast MPEG-TS", "required": [ "url" ], "properties": { "url": { "description": "UDP URL of multicast group\n", "type": "string", "format": "input_url", "examples": { "default": { "value": "udp://239.0.0.1:1234" }, "interface": { "value": "udp://eth0@239.0.0.1:1234" }, "bind_ip": { "value": "udp://239.0.0.1:1234/192.168.20.24" } }, "pattern": "^udp[12]?://([^@]+\\@)?[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+:[0-9]+.*$", "x-format-description": "input_url" } } }, { "$ref": "#/components/schemas/stream_push_base" }, { "$ref": "#/components/schemas/stream_push_udp_base" }, { "$ref": "#/components/schemas/stream_push_mpegts_base" } ] }, "stream_push_m4f": { "allOf": [ { "type": "object", "title": "M4F", "required": [ "url" ], "properties": { "url": { "description": "Another Flussonic URL where to push video to.\n", "type": "string", "format": "input_url", "examples": { "default": { "value": "m4f://your-server.com/app/stream1" } }, "pattern": "^m4fs?://.*$", "x-format-description": "input_url" } } }, { "$ref": "#/components/schemas/stream_push_base" } ] }, "stream_push_m4s": { "allOf": [ { "type": "object", "title": "M4S", "required": [ "url" ], "properties": { "url": { "description": "Flussonic stream URL where to push to.\n", "type": "string", "format": "input_url", "examples": { "default": { "value": "m4s://your-server.com/app/stream1" } }, "pattern": "^m4ss?://.*$", "x-format-description": "input_url" } } }, { "$ref": "#/components/schemas/stream_push_base" } ] }, "stream_push_decklink": { "allOf": [ { "type": "object", "title": "Decklink SDI", "required": [ "url" ], "properties": { "url": { "description": "Specify Blackmagic Decklink SDI card as a 
destination for this stream.\n\nYou need to specify exact number of output, refer to decklink manual to find\nenumeration rules.\n", "type": "string", "format": "input_url", "examples": { "default": { "value": "decklink://0" } }, "pattern": "^decklink://[0-9]+$", "x-format-description": "input_url" } } }, { "$ref": "#/components/schemas/stream_push_base" }, { "$ref": "#/components/schemas/stream_push_sdi_base" } ] }, "stream_push_dektec": { "allOf": [ { "type": "object", "title": "Dektec SDI", "required": [ "url" ], "properties": { "url": { "description": "Select which Dektec SDI card to use as a sink for this stream.\n\nDektec url is combined of card serial # and number of output port on this card.\n\nOutput ports on a card are numbered starting from 1.\nSerial numbers are uniq for each produced card. Take a look at admin UI or use\nnative dektec tools to find the serial number.\n", "type": "string", "format": "input_url", "examples": { "default": { "value": "dektec://2174223350:1" } }, "pattern": "^dektec://[0-9]+:[0-9]+$", "x-format-description": "input_url" }, "push_audio_tracks": { "description": "Configuration of an audio track push to DekTec SDI.", "items": { "allOf": [ { "$ref": "#/components/schemas/push_audio_track" } ] }, "type": "array", "x-private": true }, "genlock": { "description": "Enable clock-lock feature (if supported). 
See also genlock_status property in pusher stats.", "type": "boolean" }, "pixel_offset": { "description": "Adjusting genlock timing pixel offset.", "type": "integer" } } }, { "$ref": "#/components/schemas/stream_push_base" }, { "$ref": "#/components/schemas/stream_push_sdi_base" } ] }, "stream_push_dektec_asi": { "allOf": [ { "type": "object", "title": "Dektec ASI", "required": [ "url" ], "properties": { "url": { "description": "Select which Dektec ASI card to use as a sink for this stream.\n\nDektec url is combined of card serial # and number of output port on this card.\n\nOutput ports on a card are numbered starting from 1.\nSerial numbers are uniq for each produced card. Take a look at admin UI or use\nnative dektec tools to find the serial number.\n\nMention that ASI is a MPEG-TS transport\n", "type": "string", "format": "input_url", "examples": { "default": { "value": "dektec-asi://" } }, "pattern": "^dektec-asi://.*$", "x-format-description": "input_url" } } }, { "$ref": "#/components/schemas/stream_push_base" }, { "$ref": "#/components/schemas/stream_push_mpegts_base" } ] }, "stream_push_tshttp": { "allOf": [ { "type": "object", "title": "HTTP MPEG-TS", "required": [ "url" ], "properties": { "url": { "description": "Content will be similar to multicast MPEG-TS, but endless HTTP POST will be used to upload content.\n", "type": "string", "format": "input_url", "examples": { "default": { "value": "tshttp://your-server.com/app/stream1" }, "https": { "value": "tshttps://your-server.com/app/stream1" } }, "pattern": "^tshttps?://.*$", "x-format-description": "input_url" } } }, { "$ref": "#/components/schemas/stream_push_base" }, { "$ref": "#/components/schemas/stream_push_mpegts_base" } ] }, "stream_push_hls": { "allOf": [ { "type": "object", "title": "HLS", "required": [ "url" ], "properties": { "url": { "description": "It is possible to publish HLS to a CDN. 
Segments will be uploaded together with manifests.\n", "type": "string", "format": "input_url", "examples": { "default": { "value": "hls://your-server.com/app/stream1" } }, "pattern": "^hlss?://.*$", "x-format-description": "input_url" } } }, { "$ref": "#/components/schemas/stream_push_base" } ] }, "stream_push_srt": { "allOf": [ { "type": "object", "title": "SRT", "required": [ "url" ], "properties": { "url": { "description": "SRT URL to push video to.\n", "type": "string", "format": "input_url", "examples": { "default": { "value": "srt://my-server.com:8994" } }, "pattern": "^srt[12]?://[^:]+:[0-9]+.*$", "x-format-description": "input_url" } } }, { "$ref": "#/components/schemas/stream_push_base" }, { "$ref": "#/components/schemas/stream_push_udp_base" }, { "$ref": "#/components/schemas/stream_push_mpegts_base" }, { "$ref": "#/components/schemas/srt_config_base" } ] }, "stream_push_udp_base": { "type": "object", "properties": { "multicast_loop": { "description": "Whether to capture multicast back to the *Flussonic* host. \nThis option allows you to ingest the sent stream on the sending host with *Flussonic* or another application.\nSet to `true` for a UDP MPEG-TS push.\n", "type": "boolean", "example": true }, "standby": { "description": "Monitor the multicast group and stop pushing if another publisher is present.", "type": "boolean" }, "v": { "description": "This option allows enabling a non-default, possibly experimental, version of the UDP pusher.", "x-private": true, "type": "string", "enum": [ "timed" ] }, "bind_to_core": { "description": "Timed pusher option. The CPU core to bind the sending thread to. Not bound by default.", "x-private": true, "type": "integer" }, "timed_wait": { "description": "Timed pusher option. 
How to wait before sending a packet.", "x-private": true, "type": "string", "enum": [ "sleep", "busy" ] } } }, "stream_push_mpegts_base": { "type": "object", "properties": { "vb": { "description": "Average bitrate of the video track that you can send, including all the headers and encapsulation in the transport stream. \nFor example, the value vb=2720 approximately corresponds to the bitrate 2600 specified in the transcoder settings.\n", "type": "integer", "format": "speed", "example": 2720, "x-format-description": "speed" }, "bitrate": { "description": "The bitrate of the whole stream.", "type": "integer", "format": "speed", "example": 3200, "x-format-description": "speed" }, "pnr": { "description": "Program number in the outgoing MPEG-TS stream. A program may represent a television channel.\n", "type": "integer" }, "pids": { "description": "This parameter sets PID values for outgoing MPEG-TS streams. \nIt is possible to set PID values for PMT, SDT, and video and audio tracks.\n", "allOf": [ { "$ref": "#/components/schemas/output_mpegts_pids" } ] }, "mpegts_ac3": { "description": "Specifies how AC-3 information is packed for outgoing MPEG-TS streams. The default value is `system_b`.", "allOf": [ { "$ref": "#/components/schemas/output_mpegts_ac3" } ] }, "service": { "description": "Service name. Used to fill in the service name field within the MPEG-TS SDT table.\n", "type": "string", "example": "My service name" }, "provider": { "description": "Provider name. 
Used to fill in the service provider field within the MPEG-TS SDT table.\n", "type": "string", "example": "My provider name" } } }, "stream_push_sdi_base": { "type": "object", "properties": { "volume": { "description": "Audio volume coefficient.\nThe output audio volume is given by the relation: `output_volume = volume * input_volume`.\nThe maximum volume value is 1.0 (default value).\n", "type": "number", "example": 0.5 }, "deinterlace": { "description": "Activate deinterlacing, i.e., converting an interlaced image to a progressive image. \nIt is necessary for comfortable viewing of legacy TV video on PC/mobile devices.\n", "type": "boolean" }, "video_format": { "description": "Specify the SDI/HDMI output format.", "anyOf": [ { "$ref": "#/components/schemas/video_format" } ] }, "vbi_lines": { "description": "Lines of VBI (vertical blanking interval) of an output analog stream that will contain teletext.\nIt is used for passing teletext from MPEG-TS to analog streams.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/push-teletext-to-sdi-vbi/" }, "items": { "allOf": [ { "$ref": "#/components/schemas/vbi_lines" } ] }, "type": "array" }, "dthreads": { "description": "Defines the number of threads in the decoder. \nOne thread uses one core. \nThe default value is 4, but you can set it to the number of cores of your CPU.\n", "type": "integer", "x-notice": "video decoder threads number" }, "scale": { "description": "Defines a scaling algorithm.\nYou can choose only one algorithm at a time. \nIf the pushed stream has the same video resolution as the ingest stream, the `fast_bilinear` algorithm is used by default. \nIf the video resolution of the pushed stream does not equal the video resolution of the ingest stream, the `bicubic` algorithm is used by default. 
\nIf the algorithm is specified explicitly, it applies to all the formats.\n", "allOf": [ { "$ref": "#/components/schemas/scale_algorithm" } ] } } }, "stream_push_st2110": { "allOf": [ { "type": "object", "title": "SMPTE 2110", "required": [ "url" ], "properties": { "url": { "description": "UDP URL of multicast group\n", "type": "string", "format": "input_url", "examples": { "default": { "value": "st2110://239.0.0.1:1234" }, "interface": { "value": "st2110://eth0@239.0.0.1:1234" }, "bind_ip": { "value": "st2110://239.0.0.1:1234/192.168.20.24" } }, "pattern": "^st2110?://([^@]+\\@)?[0-9]+\\.[0-9]+\\.[0-9]+\\.[0-9]+:[0-9]+.*$", "x-format-description": "input_url" } } }, { "$ref": "#/components/schemas/stream_push_base" } ] }, "stream_input": { "oneOf": [ { "$ref": "#/components/schemas/stream_input_fake" }, { "$ref": "#/components/schemas/stream_input_file" }, { "$ref": "#/components/schemas/stream_input_h323" }, { "$ref": "#/components/schemas/stream_input_hls" }, { "$ref": "#/components/schemas/stream_input_rtmp" }, { "$ref": "#/components/schemas/stream_input_rtsp" }, { "$ref": "#/components/schemas/stream_input_srt" }, { "$ref": "#/components/schemas/stream_input_tshttp" }, { "$ref": "#/components/schemas/stream_input_mixer" }, { "$ref": "#/components/schemas/stream_input_mosaic" }, { "$ref": "#/components/schemas/stream_input_m4f" }, { "$ref": "#/components/schemas/stream_input_m4s" }, { "$ref": "#/components/schemas/stream_input_rtp" }, { "$ref": "#/components/schemas/stream_input_shoutcast" }, { "$ref": "#/components/schemas/stream_input_timeshift" }, { "$ref": "#/components/schemas/stream_input_playlist" }, { "$ref": "#/components/schemas/stream_input_copy" }, { "$ref": "#/components/schemas/stream_input_spts" }, { "$ref": "#/components/schemas/stream_input_mpts" }, { "$ref": "#/components/schemas/stream_input_publish" }, { "$ref": "#/components/schemas/stream_input_v4l" }, { "$ref": "#/components/schemas/stream_input_decklink" }, { "$ref": 
"#/components/schemas/stream_input_dektec" }, { "$ref": "#/components/schemas/stream_input_external" }, { "$ref": "#/components/schemas/stream_input_ndi" }, { "$ref": "#/components/schemas/stream_input_st2110" }, { "$ref": "#/components/schemas/stream_input_frip" } ], "x-pattern-discriminator": "url" }, "stream_input_base": { "type": "object", "properties": { "comment": { "description": "Human-readable description of the input.\n", "type": "string", "example": "This is a test input" }, "source_timeout": { "description": "The period of time, in seconds, for which Media Server will wait for new frames until it considers the source as lost.", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/add-secondary-source-for-redundancy/#source_timeout" }, "anyOf": [ { "type": "integer", "format": "seconds", "x-format-description": "seconds" }, { "enum": [ false ], "type": "boolean" } ], "example": 20, "x-api-allow": [ "watcher-core" ] }, "audio_timeout": { "description": "The period of time, in seconds, for which Media Server will wait for new audio frames until it considers the source as lost.", "externalDocs": { "description": "Find more information here" }, "type": "integer", "format": "seconds", "example": 20, "x-api-allow": [ "watcher-core" ], "x-format-description": "seconds" }, "video_timeout": { "description": "The period of time, in seconds, for which Media Server will wait for new video frames until it considers the source as lost.", "type": "integer", "format": "seconds", "example": 20, "x-api-allow": [ "watcher-core" ], "x-format-description": "seconds" }, "max_retry_timeout": { "description": "The maximum time that Media Server will set for attempts to reconnect to source when source problems occur.\nThe time between attempts is not linear and may increase if source problems are not fixed. 
This parameter limits this value, but the actual time between attempts may be longer.\n", "type": "integer", "format": "seconds", "example": 30, "minimum": 1, "x-api-allow": [ "watcher-core" ], "x-format-description": "seconds" }, "timeout": { "description": "The time, in seconds, for Media Server to switch to the fallback source if the main source stops sending frames (video or audio). \nThe important thing here is that the source remains active (connected), allowing a client-publisher to stay on the socket.\n", "type": "integer", "example": 10, "x-api-allow": [ "watcher-core" ] }, "frames_timeout": { "description": "Specifies the period of time, in seconds, for which Media Server waits for new frames to come from the data source before it generates the `frames_timed_out` event that informs you that the source might soon be lost. \nThis period of time must be smaller than `source_timeout`. \nIf frames come again from this source before `source_timeout` has passed, Media Server issues the `frames_restored` event.\n", "type": "integer", "example": 3, "x-api-allow": [ "watcher-core" ] }, "priority": { "description": "The priority that Media Server takes into account when switching to another source.\nThe source with `priority=1` has the first priority, the source with `priority=2` has the second priority, and so on.\n\nBy default, the first source in the list has the highest priority and the last source in the list has the lowest priority. \nIf priority is not specified for some sources, or if some sources have equal priorities, then the default order is applied. 
\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/add-secondary-source-for-redundancy/#priority" }, "type": "integer", "example": 1, "x-api-allow": [ "watcher-core" ] }, "stats": { "description": "Detailed runtime information about the input.", "allOf": [ { "$ref": "#/components/schemas/input_stats" } ], "readOnly": true, "x-api-allow": [ "watcher-core" ] }, "user_agent": { "type": "string", "description": "User agent. Can be modified if a protocol allows it.", "x-api-allow": [ "watcher-core" ] }, "via": { "type": "string", "format": "agent_url", "description": "Agent ID. Used as a proxy to connect to the input server.", "x-api-allow": [ "watcher-core", "central-layouter" ], "x-format-description": "agent://ID identification for `via` configuration option\n" }, "output_audio": { "description": "Enables transcoding of the published audio to another codec.\nThe option is useful when you want to get an AAC audio track from WebRTC publish with OPUS or RTSP camera with PCMU.\n", "allOf": [ { "$ref": "#/components/schemas/output_audio" } ], "x-api-allow": [ "watcher-core" ] }, "headers": { "additionalProperties": { "type": "string" }, "type": "object", "description": "Request headers as key-value pairs.", "example": { "User-Agent": "curl/7.85.0", "Authorization": "Basic dXNlcjpwYXNzd29yZA==" }, "x-api-allow": [ "watcher-core" ] }, "no_clients_reconnect_delay": { "type": "integer", "description": "Skip input start if the stream has no clients." }, "allow_if": { "type": "string", "description": "Path to a file. The input will be allowed if you put `1` in the file, or denied if `0` (reverse logic to `deny_if`).\nThis option allows you to manage inputs without API requests.\n\nFor example, your stream has two inputs and you set `allow_if = /path/to/file` for the first input.\nThe `/path/to/file` file contains only the digit `1`. 
That means that the first input is used when you play the stream.\nWhen you put `0` in the `/path/to/file` file, the first input is denied, so the second one is played.\n\nIf no such file exists, the input is allowed.\n" }, "deny_if": { "type": "string", "description": "Path to a file. The input will be denied if you put `1` in the file, or allowed if `0` (reverse logic to `allow_if`).\nThis option allows you to manage inputs without API requests.\n\nFor example, your stream has two inputs and you set `deny_if = /path/to/file` for the first input.\nThe `/path/to/file` file contains only the digit `1`. \nThat means that the first input will not be used when you play the stream, so the second one will.\nWhen you put `0` in the `/path/to/file` file, the first input is allowed to be played.\n\nIf no such file exists, the input is allowed.\n" }, "bind_ip": { "type": "string", "x-private": true, "description": "Interface IP address to bind the socket to." }, "mbr": { "x-private": true, "type": "string", "deprecated": true, "description": "Enables the multi-bitrate mode for transcoding the input.\nNeed to remove it, but must offer some replacement to Watcher\n", "x-api-allow": [ "watcher-core" ] } } }, "stream_input_fake": { "allOf": [ { "type": "object", "title": "Demo source", "properties": { "url": { "description": "URL to get a demo stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "fake://fake", "pattern": "^fake://.*$", "x-format-description": "input_url" }, "width": { "type": "integer", "description": "Width of an artificially created test video stream. \nApplicable to the `fake://fake` URL.\n" }, "height": { "type": "integer", "description": "Height of an artificially created test video stream. \nApplicable to the `fake://fake` URL.\n" }, "bitrate": { "type": "integer", "format": "speed", "description": "Bitrate of an artificially created test video stream. 
\nApplicable to the `fake://fake` URL.\n", "x-format-description": "speed" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_file": { "allOf": [ { "type": "object", "title": "File", "properties": { "url": { "description": "URL to get a stream from a file.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "file://vod/bunny.mp4", "pattern": "^file://.*$", "x-format-description": "input_url" }, "raw": { "x-private": true, "description": "If this option is enabled, the file source produces a raw stream.", "type": "boolean" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_mpegts_specific" }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_h323": { "allOf": [ { "type": "object", "title": "H323", "properties": { "url": { "description": "URL to connect to the H323 source and get the stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "h323://192.168.100.150", "pattern": "^h323://.*$", "x-format-description": "input_url" }, "id": { "type": "string", "description": "H323 input ID." }, "video_bitrate": { "type": "integer", "format": "speed", "description": "H323 input bitrate.", "x-format-description": "speed" }, "audio_bitrate": { "type": "integer", "format": "speed", "description": "H323 audio bitrate.", "x-format-description": "speed" }, "connections": { "type": "integer", "description": "H323 connections." 
} }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_hls": { "allOf": [ { "type": "object", "title": "HLS", "properties": { "url": { "description": "URL to get a stream from an HLS source.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "hls://remote.host.com/example/video.m3u8" }, "apple_standard": { "value": "hls://remote.host.com/example/index.m3u8" }, "secure": { "value": "hlss://remote.host.com/example/video.m3u8" }, "hls2": { "value": "hls2://remote.host.com/example/video.m3u8" }, "hlss2": { "value": "hlss2://remote.host.com/example/video.m3u8" }, "http": { "value": "http://remote.host.com/index.m3u8" }, "https": { "value": "https://remote.host.com/index.m3u8" } }, "pattern": "^(hls|hlss|hls2|hlss2)://.*$|^(http|https)://.*\\.m3u8((#|\\?).*)?$", "x-format-description": "input_url" }, "skip_stalled_check": { "description": "By default, Flussonic will wait for at least 2-3 new segments before making the stream available.\n\nThis parameter allows disabling this protection. Use it at your own risk: with it enabled, old content might be repeated over and over.\n", "type": "boolean" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_mpegts_specific" }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_rtmp": { "allOf": [ { "type": "object", "title": "RTMP", "properties": { "url": { "description": "URL to connect to the RTMP source and get the stream.\n\nRTMP uses a special URL consisting of at least two segments. 
*Flussonic* parses the URL and splits it into parts, \nusing the first segment as an RTMP application name.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "rtmp://remote.host.com/static/example" }, "secure": { "value": "rtmps://remote.host.com/static/example" } }, "pattern": "^(rtmp|rtmps)://.*$", "x-format-description": "input_url" }, "pageUrl": { "description": "URL of the web page from which the SWF file was loaded. \nThis is an RTMP header (Referer) used for establishing the connection.\n", "type": "string", "format": "url", "example": "http://somehost/sample.html", "x-format-description": "url" }, "swfUrl": { "description": "URL of the source SWF file making the connection by RTMP.", "type": "string", "example": "file://C:/FlvPlayer.swf" }, "tcUrl": { "description": "URL of the remote server for entering credentials. \nIt has the following format: `protocol://servername:port/appName/appInstance`.\n", "type": "string", "format": "url", "example": "rtmp://localhost:1935/testapp/instance1", "x-format-description": "url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_rtsp": { "allOf": [ { "type": "object", "title": "RTSP", "properties": { "url": { "description": "URL to connect to the RTSP source and get the stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "rtsp://remote.host.com/example" }, "secure": { "value": "rtsps://remote.host.com/example" }, "rtsp-udp": { "value": "rtsp-udp://remote.host.com/example" }, "rtsp2": { "value": "rtsp2://remote.host.com/example" } }, "pattern": "^(rtsp|rtsps|rtsp-udp|rtsp2)://.*$", "x-format-description": "input_url" }, "rtp": { "enum": [ "udp" ], "type": "string", "description": "Whether to force UDP to capture video from 
RTSP cameras.", "x-api-allow": [ "watcher-core" ] }, "tracks": { "x-private": true, "type": "array", "items": { "type": "integer" }, "description": "List of track numbers to receive when capturing a stream from an RTSP camera.", "example": [ 1 ], "x-api-allow": [ "watcher-core" ] }, "wait_rtcp": { "type": "boolean", "description": "Whether to wait for the full RTP time synchronization before the processing of frames from the RTSP camera.\n", "x-api-allow": [ "watcher-core" ] } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_v4l": { "allOf": [ { "type": "object", "title": "V4L", "required": [ "url" ], "properties": { "url": { "description": "URL to connect to the Video4Linux source and get the stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "v4l2://" }, "v4l": { "value": "v4l://" } }, "pattern": "^(v4l|v4l2)://.*$", "x-format-description": "input_url" }, "audio_device": { "description": "The audio device to capture audio from Stream Labs SDI cards.\n\nThis parameter is specified for ALSA devices in the following format `interface:card,device`.\n", "type": "string", "example": "plughw:1,0" }, "video_device": { "description": "The video device to capture video from Stream Labs SDI cards.\nIt is actually a path to a device file created on the disk by Video4Linux.\n", "type": "string", "example": "/dev/video0" }, "vbi_device": { "description": "The VBI device to capture raw VBI data from Stream Labs SDI cards. 
VBI data can contain information about teletext or closed captions.\nIt is actually a path to a VBI device file created on the disk by Video4Linux.\n", "type": "string", "example": "/dev/vbi" }, "ttxt_descriptors": { "description": "This information is necessary for adding into the PMT table to identify streams which carry teletext data in the resulting MPEG-TS stream.", "items": { "allOf": [ { "$ref": "#/components/schemas/ttxt_descriptors" } ] }, "type": "array" }, "vbi_threshold": { "description": "This parameter is used for debugging when reading teletext from VBI.\nThis is a threshold, in seconds, for turning on the decoder.\n", "type": "integer" }, "vbi_debug": { "description": "This parameter allows logging the decoded data when reading teletext from VBI.", "type": "boolean" }, "vbi_decoder": { "description": "This parameter is used for debugging when reading teletext from VBI.\nIt allows specifying which decoder is used.\n", "x-private": true, "oneOf": [ { "const": "erl", "description": "The decoder in Erlang is used." }, { "const": "nif", "description": "The decoder in C is used." } ] }, "sample_rate": { "x-private": true, "type": "integer", "description": "The input sample rate." } } }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_decklink": { "allOf": [ { "type": "object", "title": "Decklink SDI", "properties": { "url": { "description": "URL to connect to the Decklink SDI source and get the stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "decklink://0", "pattern": "^decklink://.*$", "x-format-description": "input_url" }, "mode": { "description": "Mode of the input stream captured from the Decklink card. It is composed of the size and FPS of the captured video. 
\n\nUsually, it is autodetected, but for some Decklink models you'll need to specify it manually.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/ingest-sdi-with-blackmagic/#live-sdi-capture" }, "anyOf": [ { "$ref": "#/components/schemas/bm_display_mode" } ], "example": "4d30" }, "ainput": { "description": "Audio interface for capturing from the Decklink card.\n\nUsually, it is autodetected, but for some Decklink models you should specify it manually.\n", "anyOf": [ { "type": "integer", "enum": [ 1, 2, 3, 4, 5, 6, 7 ] }, { "enum": [ "embedded", "aes_ebu", "analog", "analog_xlr", "analog_rca", "microphone", "headphones" ], "type": "string" } ], "example": "microphone" }, "vinput": { "description": "Video interface for capturing from the Decklink card.\n\nUsually, it is autodetected, but for some Decklink models you should specify it manually.\n", "anyOf": [ { "type": "integer", "enum": [ 1, 2, 3, 4, 5, 6 ] }, { "enum": [ "sdi", "hdmi", "optical_sdi", "component", "composite", "s_video" ], "type": "string" } ], "example": "hdmi" }, "vpts": { "description": "Synchronization mode for video captured from the Decklink card.\n", "oneOf": [ { "const": "audio", "description": "Synchronization by audio." }, { "const": "ref", "description": "Synchronization according to the timing reference signal." }, { "const": 2, "description": "Equivalent of 'audio'." }, { "const": 3, "description": "Equivalent of 'ref'." } ] }, "apts": { "description": "Synchronization mode for audio captured from the Decklink card.\n", "oneOf": [ { "const": "video", "description": "Synchronization by video." }, { "const": "ref", "description": "Synchronization according to the timing reference signal." }, { "const": 1, "description": "Equivalent of 'video'." }, { "const": 3, "description": "Equivalent of 'ref'." 
} ] }, "pixel": { "description": "Preferred pixel format for captured video.", "oneOf": [ { "const": "rgb8", "description": "rgb color model and 8 bits per pixel." }, { "const": "rgb10", "description": "rgb color model and 10 bits per pixel." }, { "const": "rgb12", "description": "rgb color model and 12 bits per pixel." }, { "const": "yuv8", "description": "YUV color model and 8 bits per pixel." }, { "const": "yuv10", "description": "YUV color model and 10 bits per pixel." }, { "const": "8", "description": "equivalent of `yuv8`." }, { "const": "10", "description": "equivalent of `yuv10`." } ] }, "sar": { "description": "The ratio of the width of the display representation to the width of the pixel representation of video.\n\nThis parameter is used for creating non-anamorphic video from anamorphic video.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/transcoder-internals/#transcoder-options_sar" }, "type": "string", "example": "16:9" }, "ttxt_descriptors": { "description": "This information is necessary for adding into the PMT table to identify streams which carry teletext data in the resulting MPEG-TS stream.", "items": { "allOf": [ { "$ref": "#/components/schemas/ttxt_descriptors" } ] }, "type": "array" }, "vbi_threshold": { "description": "This parameter is used for debugging when reading teletext from VBI.\nThis is a threshold, in seconds, for turning on the decoder.\n", "type": "integer" }, "vbi_debug": { "description": "This parameter allows logging the decoded data when reading teletext from VBI.", "type": "boolean" }, "audio_tracks": { "description": "The configuration of an audio track received from Decklink SDI.", "items": { "allOf": [ { "$ref": "#/components/schemas/audio_track" } ] }, "type": "array", "x-private": true } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_dektec": { "allOf": [ { "type": "object", "title": "DekTec SDI", "properties": { 
"url": { "description": "URL to connect to the DekTec SDI source and get the stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "dektec://2174220025:2", "pattern": "^dektec://.*$", "x-format-description": "input_url" }, "pixel": { "description": "Preferred pixel format for captured video.", "oneOf": [ { "const": "rgb8", "description": "rgb color model and 8 bits per pixel." }, { "const": "rgb10", "description": "rgb color model and 10 bits per pixel." }, { "const": "rgb12", "description": "rgb color model and 12 bits per pixel." }, { "const": "yuv8", "description": "YUV color model and 8 bits per pixel." }, { "const": "yuv10", "description": "YUV color model and 10 bits per pixel." }, { "const": "8", "description": "equivalent of `yuv8`." }, { "const": "10", "description": "equivalent of `yuv10`." } ] }, "sar": { "description": "The ratio of the width of the display representation to the width of the pixel representation of video.\n\nThis parameter is used for creating non-anamorphic video from anamorphic video.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/transcoder/#video-options" }, "type": "string", "example": "16:9" }, "ttxt_descriptors": { "description": "This information is necessary for adding into the PMT table to identify streams which carry teletext data in the resulting MPEG-TS stream.", "items": { "allOf": [ { "$ref": "#/components/schemas/ttxt_descriptors" } ] }, "type": "array" }, "vbi_debug": { "description": "This parameter allows logging the decoded data when reading teletext from VBI.", "type": "boolean" }, "audio_tracks": { "description": "The configuration of an audio track received from DekTec SDI.", "items": { "allOf": [ { "$ref": "#/components/schemas/audio_track" } ] }, "type": "array", "x-private": true }, "scte35": { "description": "This option disables processing of SCTE-35 
markers from an MPEG-TS input stream.\nDeprecated since 22.12.\nAvailable ways to disable processing of SCTE-35 markers:\n1. the `pids` option to select tracks without SCTE-35 markers\n2. the `hls_scte35` option from stream_config_media for HLS output\n3. appropriate tuning of PIDs in the transponder\n", "type": "boolean", "default": true, "example": true, "deprecated": true, "x-delete-at": 23.09 } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_srt": { "allOf": [ { "type": "object", "title": "SRT", "required": [ "url" ], "properties": { "url": { "description": "Artificial URL to connect to the SRT source and get the stream.\n\nSRT requires an IP and port, so we create an artificial URL to specify the options that manage the data interchange.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "srt://remote.host.com:1234?bind_ip=10.77.0.100" }, "srt1": { "value": "srt1://remote.host.com:1234?bind_ip=10.77.0.100" }, "srt2": { "value": "srt2://remote.host.com:1234?bind_ip=10.77.0.100" } }, "pattern": "^(srt|srt1|srt2)://.*$", "x-format-description": "input_url" }, "closed_captions": { "additionalProperties": { "type": "string" }, "type": "object", "x-key-type": "string", "description": "The rules for handling closed captions.\n" } } }, { "$ref": "#/components/schemas/stream_input_srt_publish_specific" }, { "$ref": "#/components/schemas/stream_input_base" }, { "$ref": "#/components/schemas/srt_config_base" } ] }, "stream_input_srt_publish_specific": { "type": "object", "properties": { "subtitles": { "description": "This configuration is deprecated. Use the `dvbocr` configuration field in the stream.\n\nThis parameter allows managing subtitles in an output stream.\n", "oneOf": [ { "const": "drop", "description": "An output stream will have no subtitles track." 
}, { "const": "accept", "description": "An output stream will have a subtitles track in DVB, without conversion to text (default behavior)." }, { "const": "ocr_replace", "description": "An output stream will have a track containing subtitles converted to a text format (WebVTT)." }, { "const": "ocr_add", "description": "An output stream will have two tracks containing subtitles: \nthe original track with subtitles in DVB and a new track with text subtitles.\n" } ], "example": "drop", "deprecated": true, "x-delete-at": 25.03 }, "scte35": { "description": "This option disables processing of SCTE-35 markers from an SRT input stream.\n", "type": "boolean", "default": true, "example": true } } }, "stream_input_tshttp": { "allOf": [ { "type": "object", "title": "TSHTTP", "properties": { "url": { "description": "URL to ingest and pass a stream \"as is\" without repackaging.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "tshttp://ADMIN:PASSWORD@FLUSSONIC_IP/flussonic/api/dvbts/a0" }, "secure": { "value": "tshttps://127.0.0.1:8080" }, "mpegts": { "value": "http://remote.host.com/mpegts" }, "mpegts_secure": { "value": "https://remote.host.com/mpegts" }, "ts": { "value": "http://remote.host.com/example.ts" }, "ts_secure": { "value": "https://remote.host.com/example.ts" } }, "pattern": "^(tshttp|tshttps)://.*$|^(http|https)://.*(\\.ts|/mpegts)$", "x-format-description": "input_url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_mpegts_specific" }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_webrtc_publish_specific": { "type": "object", "title": "WebRTC", "properties": { "prefer_codec": { "description": "Choose one of the listed video codecs at the start of the publication via WebRTC.\n", "deprecated": true, "x-delete-at": 24.11, "x-alias": "prefer_video_codec", "allOf": [ { "$ref": 
"#/components/schemas/webrtc_prefer_video_codec" } ], "example": "av1" }, "prefer_video_codec": { "description": "Prefer one of the listed video codecs at the start of the publication via WebRTC.\n", "allOf": [ { "$ref": "#/components/schemas/webrtc_prefer_video_codec" } ], "example": "av1" }, "prefer_video_profile": { "description": "Prefer one of the listed video profiles at the start of the publication via WebRTC.\nThis option should help if the client's equipment cannot encode correctly to the automatically selected profile. Use with option if the publication does not work.\n", "type": "string", "example": "42e01f", "x-private": true }, "transport": { "description": "Choose the prefered transport of the publication via WebRTC: UDP or TCP.\n", "allOf": [ { "$ref": "#/components/schemas/webrtc_transport" } ], "example": "udp" }, "min_bitrate": { "description": "The minimum bitrate threshold, in kbit/s. The default value is 100 kbit/s.", "type": "integer", "example": 150 }, "webrtc_abr": { "description": "Whether the adaptive bitrate mechanism is used for WebRTC publications.", "type": "boolean", "example": true }, "abr_stepup": { "description": "Increment step for raising the bitrate to the maximum, in percent. The default step is 30%. \nIf the loss is less than `abr_loss_lower`, Flussonic makes the publisher to step up from the current bitrate to the maximum one with the rate of `abr_stepup percent`.\n", "type": "integer" }, "abr_correction": { "description": "The correction between the target bitrate (Receiver Estimated Maximum Bitrate, calculated in Flussonic) and browser bitrate, in kbit/s.\nFlussonic sends the target bitrate to the browser from which the publication is carried out so that the browser adjusts the bitrate of the publication by this value.\nThe default value is 300 kbit/s.\n", "type": "integer", "example": 200 }, "abr_loss_lower": { "type": "number", "description": "The lower limit of packet loss. 
When it is reached, Flussonic raises the bitrate. \nThat is, if packet loss is less than `abr_loss_lower`, Flussonic makes the publisher step up from the current bitrate to the maximum one at the rate of `abr_stepup` percent.\n", "example": 2 }, "abr_loss_upper": { "description": "The upper limit of packet loss. When it is reached, Flussonic reduces the bitrate. \nThat is, if packet loss is greater than `abr_loss_upper`, Flussonic makes the publisher reduce the current bitrate in steps at a maximum rate of `abr_stepdown` percent.\n", "type": "number", "example": 10 }, "abr_stepdown": { "description": "A step of reducing the bitrate to the minimum. \nIf packet losses are greater than `abr_loss_upper`, Flussonic makes the publisher reduce the current bitrate in steps at a maximum rate of `abr_stepdown` percent.\n", "type": "number" }, "abr_mode": { "description": "The algorithm for determining the need to change the bitrate of the published stream and for calculating the target bitrate. \nTwo options are possible:\n\n* `abr_mode=0` - This mode takes into account the packet losses, target bitrate, browser bitrate, and the number of auto-adjustment cycles.\n* `abr_mode=1` - This mode considers only packet losses and target bitrate.\n", "type": "integer", "example": 1 }, "abr_debug": { "description": "Whether the adaptive bitrate process is logged.", "type": "integer", "example": 1 }, "abr_cycles": { "description": "The number of cycles of bitrate auto-adjustment.\nAfter the specified number of auto-adjustment cycles passes, Flussonic considers the bitrate to be optimal, and it is no longer analyzed. \nBy default, `abr_cycles`=5.
\nIf `abr_cycles`=0, the adjustment process takes place all the time while the publication lasts.\n", "type": "integer", "example": 3 }, "abr_max_bitrate": { "description": "Maximum bitrate for the adjustment process, in kbit/s.\nFlussonic will keep the publication bitrate equal to or below the specified value.\n", "type": "integer", "default": 2500, "example": 1000 } } }, "stream_input_mixer": { "allOf": [ { "type": "object", "title": "Mixer", "properties": { "url": { "description": "URL to make a mixer stream from other streams.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "mixer://stream1,stream2", "pattern": "^mixer://.*$", "x-format-description": "input_url" }, "sync": { "description": "This parameter is used for a mixer stream that uses other streams as its video and audio sources.\n\nIf the parameter is set to `realtime`, audio frames will be played in sync with video frames: \nif the difference between timestamps of an audio frame and a corresponding video frame is more than 2 seconds, \nthe audio frame will be played at the timestamp of the video frame.
\n\nIf this parameter is set to `dts`, no synchronization is performed.\n", "enum": [ "dts", "realtime" ], "type": "string", "example": "dts", "default": "dts", "x-api-allow": [ "watcher-core" ] }, "audio_add": { "type": "integer", "description": "Moves the audio timestamp forwards or backwards by a specified number of milliseconds.", "deprecated": true, "x-delete-at": 23.09, "format": "milliseconds", "x-alias": "audio_offset", "x-api-allow": [ "watcher-core" ], "x-format-description": "milliseconds" }, "audio_offset": { "type": "integer", "description": "Renamed from audio_add; works only for the dts sync method", "x-private": true, "format": "milliseconds", "x-format-description": "milliseconds" }, "mixer_strategy": { "description": "The mixing mode for the `mixer://` input type.\n", "oneOf": [ { "const": "all", "description": "Mix all input tracks." }, { "const": "first_video_audio", "description": "Mix only the first video track of the first input with the first audio track of the second input." } ], "default": "first_video_audio", "type": "string" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_timeshift": { "allOf": [ { "type": "object", "title": "Timeshift", "properties": { "url": { "description": "Special URL to play the archive recording of a stream with a fixed delay.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "timeshift://channel/7200" } }, "pattern": "^timeshift://.*$", "x-format-description": "input_url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_m4f": { "allOf": [ { "type": "object", "title": "M4F", "properties": { "url": { "description": "URL to get a stream from an m4f source.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": {
"default": { "value": "m4f://remote.host.com/example" }, "secure": { "value": "m4fs://remote.host.com/example" } }, "pattern": "^(m4f|m4fs)://.*$", "x-format-description": "input_url" }, "closed_captions": { "additionalProperties": { "type": "string" }, "type": "object", "x-key-type": "string", "description": "The rules for handling the closed captions.\n" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_copy": { "allOf": [ { "type": "object", "title": "Copy source", "properties": { "url": { "description": "URL to connect to the source and get a copy of the original stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "copy://stream1", "pattern": "^copy://.*$", "x-format-description": "input_url" }, "closed_captions": { "additionalProperties": { "type": "string" }, "type": "object", "x-key-type": "string", "description": "The rules for handling the closed captions.\n" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_shoutcast": { "allOf": [ { "type": "object", "title": "SHOUTcast", "properties": { "url": { "description": "URL to connect to the SHOUTcast source and get the stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "shoutcast://remote.host.com/example/shoutcast" }, "secure": { "value": "shoutcasts://remote.host.com/example/shoutcast" } }, "pattern": "^(shoutcast|shoutcasts)://.*$", "x-format-description": "input_url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_rtp": { "allOf": [ { "type": "object", "title": "RTP", "properties": { "url": { "description": "URL to connect to an RTP source and get the stream.\n", "type": "string", "format": "input_url", "x-api-allow": [
"watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "rtp://remote.host.com", "pattern": "^rtp://.*$", "x-format-description": "input_url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_m4s": { "allOf": [ { "type": "object", "title": "M4S", "properties": { "url": { "description": "URL to get a stream from an m4s source.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "m4s://remote.host.com/example" }, "secure": { "value": "m4ss://remote.host.com/example" } }, "pattern": "^(m4s|m4ss)://.*$", "x-format-description": "input_url" }, "closed_captions": { "additionalProperties": { "type": "string" }, "type": "object", "x-key-type": "string", "description": "The rules for handling the closed captions.\n" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_mosaic": { "allOf": [ { "type": "object", "title": "Mosaic", "properties": { "url": { "description": "Special URL to make a mosaic stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "mosaic://cam1,cam2,cam3,cam4?fps=20&preset=ultrafast&bitrate=1024k&size=340x240&mosaic_size=16" }, "mosaic2": { "value": "mosaic2://" } }, "pattern": "^(mosaic|mosaic2)://.*$", "x-format-description": "input_url" }, "disable_video": { "x-private": true, "type": "boolean", "description": "Whether to show video from streams included in the mosaic." }, "samples": { "x-private": true, "type": "integer", "description": "The input samples." }, "sample_rate": { "x-private": true, "type": "integer", "description": "The input sample rate." }, "bitrate": { "type": "integer", "format": "speed", "description": "Bitrate of the audio.
\n", "x-format-description": "speed" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_publish": { "allOf": [ { "type": "object", "title": "Publish", "properties": { "url": { "description": "The publish:// URL used to indicate where this stream started in publish mode.\n\nYou can publish videos to Flussonic using the following URLs: \n __RTSP__: rtsp://FLUSSONIC-IP/stream_name \n __HTTP MPEG-TS__: http://FLUSSONIC-IP/stream_name/mpegts \n __RTMP__: rtmp://flussonic-ip/published or rtmp://flussonic-ip/static/published \n __WebRTC__: http://FLUSSONIC-IP/stream_name/whip \n __SRT__: srt://FLUSSONIC-IP:SRT_PORT?streamid=#!::r=STREAM_NAME,m=publish\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "publish://", "pattern": "^publish://.*$", "x-format-description": "input_url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_webrtc_publish_specific" }, { "$ref": "#/components/schemas/stream_input_srt_publish_specific" }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_playlist": { "allOf": [ { "type": "object", "title": "Playlist", "properties": { "url": { "description": "URL to get a stream from playlist.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "playlist://remote.host.com/example.m3u8", "pattern": "^playlist://.*$", "x-format-description": "input_url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_ndi": { "allOf": [ { "type": "object", "title": "NDI", "properties": { "url": { "description": "URL to get a stream from NDI source. 
NDI software usually displays sources like `My PC (Camera1)`; convert this into `ndi://My PC/Camera1`.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "ndi://hostname/Source1", "pattern": "^ndi://.*$", "x-format-description": "input_url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_frip": { "allOf": [ { "type": "object", "title": "FRIP", "properties": { "url": { "description": "FRIP input. Can be a command if it starts with `-`, or an existing socket.\n", "type": "string", "examples": { "socket": { "value": "frip://hostname/Source1" }, "cmd": { "value": "frip://-contrib/devel/simulator.erl" } }, "pattern": "^frip://.*$", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ] }, "socket_dir": { "description": "Directory for shared memory (shmem) files.\n", "type": "string", "example": "tmp" }, "shmem_size": { "description": "Size of the shared memory buffer.
Omit to set it automatically.\n", "type": "integer", "example": 1024000 } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_spts": { "allOf": [ { "type": "object", "title": "SPTS", "properties": { "url": { "description": "URL to connect to the SPTS source and get the stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "udp://239.0.0.1:1234" }, "udp1": { "value": "udp1://239.0.0.1:1234" }, "udp2": { "value": "udp2://239.0.0.1:1234" }, "udp3": { "value": "udp3://239.0.0.1:1234" } }, "pattern": "^(udp|udp1|udp2|udp3)://.*$", "x-format-description": "input_url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_mpegts_specific" }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_mpts": { "allOf": [ { "type": "object", "title": "MPTS", "properties": { "url": { "description": "URL to get a stream from an MPTS source.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "examples": { "default": { "value": "mpts-udp://239.0.0.1:1234" }, "mpts-http": { "value": "mpts-http://239.0.0.1:1234" }, "mpts-https": { "value": "mpts-https://239.0.0.1:1234" }, "mpts-dvb": { "value": "mpts-dvb://asi_10?program=15" }, "dvb": { "value": "dvb://asi_10?program=15" } }, "pattern": "^(mpts-udp|mpts-http|mpts-https|mpts-dvb|dvb)://.*$", "x-format-description": "input_url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_mpegts_specific" }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_mpegts_specific": { "type": "object", "properties": { "programs": { "description": "Choose a program to ingest from an MPEG-TS stream.", "items": { "type": "integer" }, "type": "array", "example": [ 1 ] }, "pids": { "description": "Choose a specific PID to ingest
from an MPEG-TS stream. \nA PID identifies a separate data stream inside the multiplexed MPEG-TS stream. \nIt is possible to set PID values for PMT, SDT, video, and audio tracks.\n", "items": { "type": "integer" }, "type": "array", "example": [ 211 ] }, "no_fix_subs_dts": { "x-private": true, "description": "If this option is enabled, Flussonic will not try to fix subtitles DTS.", "type": "boolean" }, "cc_check": { "x-private": true, "description": "This parameter defines the behavior when getting a CC (Continuity Counter) error.\n", "oneOf": [ { "const": "no", "description": "Do nothing." }, { "const": "log", "description": "Write to the log." }, { "const": "repeat", "description": "Try again." } ] }, "subtitles": { "description": "This configuration is deprecated. Use the `dvbocr` configuration field in the stream.\n\nThis parameter allows you to manage subtitles in an output stream.\n", "oneOf": [ { "const": "drop", "description": "An output stream will have no subtitles track." }, { "const": "accept", "description": "An output stream will have a subtitles track in DVB, without conversion to text (default behavior)." }, { "const": "ocr_replace", "description": "An output stream will have a track containing subtitles converted to a text format (WebVTT)." }, { "const": "ocr_add", "description": "An output stream will have two tracks containing subtitles: \nthe original track with subtitles in DVB and a new track with text subtitles.\n" } ], "example": "drop", "deprecated": true, "x-delete-at": 25.03 }, "closed_captions": { "additionalProperties": { "type": "string" }, "type": "object", "x-key-type": "string", "description": "The rules for handling the closed captions.\n" }, "scte35": { "description": "This option disables processing of SCTE-35 markers from an MPEG-TS input stream.\nDeprecated since 22.12.\nAvailable ways to disable processing of SCTE-35 markers:\n1. pids option to select tracks without SCTE-35 markers\n2.
hls_scte35 option from stream_config_media for hls output\n3. tuning the appropriate PIDs in the transponder\n", "type": "boolean", "default": true, "example": true, "deprecated": true, "x-delete-at": 23.09 }, "languages": { "additionalProperties": { "type": "string" }, "type": "object", "x-key-type": "mpegts_lang_track", "description": "An array of MPEG-TS language descriptors in the format `[{key: track, value: language}]`\n" }, "bypass_psis": { "x-private": true, "description": "The list of PIDs that will transmit PSI tables as video frames (content=metadata).", "items": { "type": "integer" }, "type": "array" }, "try_adts": { "x-private": true, "description": "If this option is enabled, the decoder tries to decode LATM as ADTS.", "type": "boolean" } } }, "stream_input_external": { "allOf": [ { "type": "object", "title": "External", "x-private": true, "properties": { "url": { "description": "URL to make an External stream.\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "smartcam", "central-layouter" ], "example": "ffmpeg -i mmsh://wideo.umk.um", "pattern": "^ffmpeg .*$", "x-format-description": "input_url" } }, "required": [ "url" ] }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "stream_input_st2110": { "allOf": [ { "type": "object", "title": "SMPTE 2110\n", "required": [ "url" ], "properties": { "url": { "description": "SMPTE 2110 UDP multicast group\n", "type": "string", "format": "input_url", "x-api-allow": [ "watcher-core", "vision-config-external", "central-layouter" ], "examples": { "default": { "value": "st2110://239.0.0.1:1234" }, "interface": { "value": "st2110://eth0@239.0.0.1:1234" }, "bind_ip": { "value": "st2110://239.0.0.1:1234/192.168.20.24" } }, "pattern": "^st2110://.*$", "x-format-description": "input_url" }, "width": { "type": "integer", "description": "Must specify received pixel width\n" }, "height": { "type": "integer", "description": "Must specify received pixel
height\n" }, "bind_to_core": { "type": "integer", "description": "Optional CPU core to bind to\n" } } }, { "$ref": "#/components/schemas/stream_input_base" } ] }, "webrtc_abr_opts": { "type": "object", "properties": { "start_track": { "description": "Video track number from which playback starts. Possible values: `v1`, `v2`, `v3` and so on.\n\nIf not specified, or an audio track is specified (`start_track=a3`), or the video track number does not exist, \nplayback starts with the track number in the middle of the list (e.g. `v2` if you have tracks `v1`, `v2`, and `v3`) \nand then adjusts to the bandwidth availability.\n\nIf some tracks are excluded by the query parameter `?filter=tracks:...`, Flussonic searches for an available track with a lower number, down to `v0`. \nIf no track with a lower number was found, Flussonic searches for the closest track with a higher number.\n", "type": "string", "example": "v2" }, "loss_count": { "description": "Number of recent packet loss events to consider when switching bitrate.", "default": 2, "type": "integer", "x-private": true }, "up_window": { "description": "Switch bitrate to a higher value if in the last `up_window` number of seconds there were fewer than `loss_count` lost packets.", "default": 20, "type": "integer", "x-private": true, "example": 17 }, "down_window": { "description": "Switch bitrate to a lower value if in the last `down_window` number of seconds there were more than `loss_count` lost packets.", "default": 5, "type": "integer", "x-private": true, "example": 6 }, "ignore_remb": { "description": "If `true`, Flussonic ignores REMB (Receiver Estimated Maximum Bitrate) reported by the client when switching bitrate to a higher value.\nIf `false`, the bitrate will not exceed the one sent by the client in the REMB.\n", "default": true, "type": "boolean", "x-private": true, "example": true }, "bitrate_prober": { "description": "If `true`, Flussonic periodically sends `probe` packets to measure available bandwidth and switches bitrate to a
higher value if possible.\n", "default": true, "type": "boolean", "x-private": true, "example": true }, "bitrate_probing_interval": { "description": "How often Flussonic sends `probe` packets, in seconds.\n", "default": 5, "type": "integer", "x-private": true, "example": 6 } } }, "backup_config": { "type": "object", "properties": { "file": { "description": "Path to the backup file in a VOD location on the server (**not on the local disk!**). \nThe backup file is played to fill in a time interval when the source is down.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/add-secondary-source-for-redundancy/#live-sources-url_file" }, "type": "string", "example": "vod/blank.mp4" }, "timeout": { "description": "The time (in seconds) for Flussonic to switch to the fallback source if the main source stops sending frames. \nThe important thing here is that the source remains active (connected), allowing a publishing client to stay on the socket.\nThis option takes any type of frames into account. \n\nIf you do not specify a timeout specifically for a fallback source, then in the absence of frames, `source_timeout` of the main source will be used.\n", "type": "integer", "example": 10 }, "audio_timeout": { "description": "The time (in seconds) for Flussonic to switch to the fallback source if the main source stops sending audio frames.", "type": "integer", "example": 5 }, "video_timeout": { "description": "The time (in seconds) for Flussonic to switch to the fallback source if the main source stops sending video frames.", "type": "integer", "example": 4 }, "transcode": { "description": "Whether or not to transcode the backup file. Set to `true` by default.
\nIf set to `false`, backup file frames will be passed as-is to the output stream.\n\nShould not be set to `false` unless the backup file has the same stream characteristics\nas the live stream.\n", "externalDocs": { "description": "Find more information here", "url": "https://flussonic.com/doc/add-secondary-source-for-redundancy/#live-sources-backup_transcode" }, "type": "boolean" }, "dvr": { "description": "Whether or not to record a backup to DVR. \nSet to `false` by default (backup is not recorded).\n", "type": "boolean" } } }, "auth_url": { "anyOf": [ { "type": "string", "format": "auth_url", "x-format-description": "This may be one of a limited set of schemes or a .lua file on disk\n" }, { "type": "string", "oneOf": [ { "const": "true", "description": "Allow all playback sessions. Use this value to override the template value." } ] } ] }, "dvr_info": { "type": "object", "properties": { "from": { "type": "integer", "format": "utc", "description": "The UTC timestamp of the first recording in this archive.", "example": 1641045644, "x-format-description": "Unix timestamp in seconds", "minimum": 1000000000, "maximum": 10000000000 }, "depth": { "type": "integer", "format": "seconds", "description": "The time interval between the start of the *first* recording segment and the end of the *last* one.", "example": 259200, "x-format-description": "seconds" }, "ranges": { "deprecated": true, "x-delete-at": 24.09, "description": "The list of DVR ranges. This parameter is replaced by the `ranges_list` method.", "items": { "allOf": [ { "$ref": "#/components/schemas/dvr_range" } ] }, "type": "array" }, "bytes": { "description": "The size of the recorded archive.", "type": "integer", "format": "bytes", "example": 129600000000, "x-format-description": "bytes" }, "disk_size": { "description": "The size of the recorded archive.
Deprecated: use `bytes` instead.", "type": "integer", "format": "bytes", "example": 1099511627776, "deprecated": true, "x-delete-at": 25.07, "x-format-description": "bytes" }, "duration": { "type": "integer", "format": "seconds", "description": "The total duration of the recorded segments, excluding recording gaps.\nIt can be smaller than `depth` if you have gaps.\n", "example": 172800, "x-format-description": "seconds" } }, "required": [ "from", "depth", "ranges" ] }, "url_prefix": { "anyOf": [ { "enum": [ false ], "type": "boolean" }, { "type": "string" } ] }, "output_mpegts_pids": { "type": "object", "properties": { "pmt": { "description": "PID of the elementary stream that contains the Program Map Table (PMT) in the outgoing MPEG-TS stream.\n\nThe PMT contains the description of each program and lists the PIDs of elementary streams associated with that program.\nFor instance, a transport stream used in digital television might contain three programs, to represent three television channels. \nSuppose each channel consists of one video stream, one or two audio streams, and any necessary metadata. \nA receiver wishing to decode one of the three channels merely has to decode the payloads of each PID associated with its program. \nIt can discard the contents of all other PIDs.\n", "allOf": [ { "$ref": "#/components/schemas/ts_pid" } ] }, "pcr": { "description": "PID of the elementary stream that contains PCR (Program Clock Reference) in the outgoing MPEG-TS stream.\n\nPCR is the time label used to synchronize stream playback with real time. \nAdditionally, for DVB streams it is used for managing a decoder and its buffer. \nIn this case, PCR gives a signal to the frames with DTS