A business requirement called for playing live video on a front-end page, but the camera outputs an RTSP stream, which browsers cannot play directly.
RTSP (Real Time Streaming Protocol) is an application-layer network protocol designed for delivering real-time data over a network. A brief overview:

Definition: RTSP was proposed jointly by RealNetworks and Netscape for streaming media efficiently over IP networks. It lets a client send control commands such as play, pause, and fast-forward to a server, but it does not carry the media itself; data transfer is delegated to lower-level protocols such as RTP/RTCP. Purpose: RTSP controls the delivery of data with real-time properties, giving clients remote control over a streaming server.

Message exchange: RTSP messages travel over TCP. A typical flow is: create the RTSP socket, wait for a client connection, receive a request, handle it, send the response, and release resources when the session ends. Transport: RTSP itself carries no media; audio and video are delivered over RTP (Real-time Transport Protocol), which usually runs on UDP to keep latency low. URL format: an RTSP URL generally looks like `rtsp://host[:port]/abs_path/content_name`, where host is a valid domain name or IP address, port defaults to 554, abs_path is the absolute path, and content_name names the content. RTSP is widely used in video surveillance, video conferencing, online playback, and other scenarios that need real-time video: RTSP acts as the control protocol (play, pause, stop), while RTP carries the actual video data.

In short, RTSP is a protocol purpose-built for controlling real-time streams, and it underpins video surveillance, conferencing, and similar systems.
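The URL format above can be sketched as a tiny parser. This is a hypothetical helper for illustration only, not part of any library mentioned here:

```javascript
// Split an RTSP URL into credentials, host, port and path.
// The port defaults to 554 when omitted, per the RTSP convention.
function parseRtspUrl(url) {
  const m = url.match(/^rtsp:\/\/(?:([^@/]+)@)?([^/:]+)(?::(\d+))?(\/.*)?$/);
  if (!m) throw new Error('not an RTSP URL: ' + url);
  return {
    auth: m[1] || null,              // "user:password" if present
    host: m[2],                      // domain name or IP address
    port: m[3] ? Number(m[3]) : 554, // 554 is the RTSP default
    path: m[4] || '/',               // abs_path + content_name
  };
}
```

For example, `parseRtspUrl('rtsp://user:pw@cam.local:8554/ch0')` yields host `cam.local`, port `8554`, and path `/ch0`.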
To display the video in a web page it has to be converted to some other format. The approach most guides describe is to install ffmpeg and nginx: ffmpeg chops the stream into HLS segments and nginx serves them to the front end. That works, but it is fiddly, I did not want to deploy that many services, the latency is relatively high, and adding cameras later is inconvenient because the stream URLs cannot be obtained dynamically. Below are the two approaches I found that are both convenient and solid.
WebRTC-Streamer is an open-source project built on WebRTC that lets users stream live audio and video to a web browser without installing any extra plugins or software.
A quick introduction to webrtc-streamer:
Core function
Its main job is converting RTSP streams into WebRTC.
Windows example
Extract webrtc-streamer-v0.8.7-dirty-Windows-AMD64-Release.tar.gz onto the desktop.
Enter the folder and double-click webrtc-streamer.exe to start the service.
Docker deployment

```
docker run -p 8000:8000 -it mpromonet/webrtc-streamer
```
Using a V4L2 device

```
docker run --device=/dev/video0 -p 8000:8000 -it mpromonet/webrtc-streamer
```
Example page
```html
<html>
  <head>
    <script src="libs/adapter.min.js"></script>
    <script src="webrtcstreamer.js"></script>
    <script>
      var webRtcServer = null;
      window.onload = function () {
        webRtcServer = new WebRtcStreamer("video", "localhost:8000");
        webRtcServer.connect("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov");
      };
      window.onbeforeunload = function () {
        webRtcServer.disconnect();
      };
    </script>
  </head>
  <body>
    <video id="video"></video>
  </body>
</html>
```
The JS files are in the html directory of the release archive.
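One of my complaints about the ffmpeg + nginx route was that adding cameras later is awkward. With webrtc-streamer the page can simply drive one player per camera. A minimal sketch, assuming the same `WebRtcStreamer` API as in the page above; the camera URLs are placeholders:

```javascript
// Map each camera URL to a player config; each camera needs its own
// <video id="video1">, <video id="video2">, ... element in the page.
function buildPlayers(cameraUrls, server) {
  return cameraUrls.map((rtsp, i) => ({
    videoId: 'video' + (i + 1),
    server,
    rtsp,
  }));
}

// In the browser, each entry would back one WebRtcStreamer instance:
//   const s = new WebRtcStreamer(p.videoId, p.server); s.connect(p.rtsp);
const players = buildPlayers(
  ['rtsp://user:pw@192.168.1.10:554/ch0', 'rtsp://user:pw@192.168.1.11:554/ch0'],
  'localhost:8000'
);
```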
RTSPtoWeb converts your RTSP streams into formats that work in a web browser, such as MSE (Media Source Extensions), WebRTC, or HLS. It is written in pure Go, with no FFmpeg or GStreamer involved!
It is a powerful tool: it can serve an RTSP source as HLS, Low-Latency HLS, MSE, or WebRTC, it can either keep a persistent connection to the camera or connect on demand, and, best of all, it ships with a web UI.
This section focuses on Docker deployment.
Following the GitHub documentation as-is gave me a startup error and an unreachable web management UI; here is what worked for me.
Pull the image

```
docker pull ghcr.io/deepch/rtsptoweb:latest
```
Create the config file. On Ubuntu, make a folder on the desktop to hold it, enter it, and open the file in an editor:

```
mkdir rtsptoweb
cd rtsptoweb
vi config.json
```

Paste in the following configuration:
```json
{
  "server": {
    "debug": true,
    "log_level": "info",
    "http_demo": true,
    "http_debug": false,
    "http_login": "demo",
    "http_password": "demo",
    "http_port": ":8083",
    "ice_servers": [],
    "rtsp_port": ":554"
  },
  "streams": {
    "demo1": {
      "name": "test video stream 1",
      "channels": {
        "0": {
          "name": "ch1",
          "url": "rtsp://user:password@ip:554",
          "on_demand": true,
          "debug": false,
          "audio": true,
          "status": 0
        }
      }
    }
  },
  "channel_defaults": {
    "on_demand": true
  }
}
```
Save and exit, then start the service:

```
docker run --name rtsp-to-web \
  -v /home/<your-ubuntu-user>/rtsptoweb/config.json:/config/config.json \
  --network host \
  ghcr.io/deepch/rtsptoweb:latest
```
The `--network host` flag is the important part: without it, WebRTC playback may fail. (When host networking is in effect, any `-p 8083:8083` port mapping is ignored by Docker.)
Open the web UI at localhost:8083.
Add a camera
Click Add stream, enter the camera name and its RTSP URL, then click play to test the stream.
That completes the deployment, but the front end still needs stream URLs in the other formats.
There are four kinds of stream URLs in total.
First fetch all camera information through the API; the IDs in the response are used to compose the URLs. On the server, run:

```
curl http://demo:demo@127.0.0.1:8083/streams
```

Result:
```json
{
    "status": 1,
    "payload": {
        "18f9dd80-ee96-4d48-b4f4-c154852ba071": {
            "channels": {
                "0": {
                    "on_demand": true,
                    "url": "rtsp://localhost:8554/test"
                }
            },
            "name": "test"
        }
    }
}
```
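The stream IDs and channel IDs in this response are what the playback URLs are built from. A small sketch of walking the payload (field names taken from the response above):

```javascript
// Collect every { streamId, channelId, name } triple from a
// GET /streams response body.
function listChannels(response) {
  const out = [];
  for (const [streamId, stream] of Object.entries(response.payload)) {
    for (const channelId of Object.keys(stream.channels)) {
      out.push({ streamId, channelId, name: stream.name });
    }
  }
  return out;
}
```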
HLS

```
GET /stream/{STREAM_ID}/channel/{CHANNEL_ID}/hls/live/index.m3u8
curl http://127.0.0.1:8083/stream/{STREAM_ID}/channel/{CHANNEL_ID}/hls/live/index.m3u8
ffplay http://127.0.0.1:8083/stream/{STREAM_ID}/channel/{CHANNEL_ID}/hls/live/index.m3u8
```

With the camera information above filled in:
http://127.0.0.1:8083/stream/18f9dd80-ee96-4d48-b4f4-c154852ba071/channel/0/hls/live/index.m3u8
HLS-LL

```
GET /stream/{STREAM_ID}/channel/{CHANNEL_ID}/hlsll/live/index.m3u8
curl http://127.0.0.1:8083/stream/{STREAM_ID}/channel/{CHANNEL_ID}/hlsll/live/index.m3u8
ffplay http://127.0.0.1:8083/stream/{STREAM_ID}/channel/{CHANNEL_ID}/hlsll/live/index.m3u8
```

With the camera information above filled in:
http://127.0.0.1:8083/stream/18f9dd80-ee96-4d48-b4f4-c154852ba071/channel/0/hlsll/live/index.m3u8
MSE

```
/stream/{STREAM_ID}/channel/{CHANNEL_ID}/mse?uuid={STREAM_ID}&channel={CHANNEL_ID}
ws://127.0.0.1:8083/stream/{STREAM_ID}/channel/{CHANNEL_ID}/mse?uuid={STREAM_ID}&channel={CHANNEL_ID}
```

NOTE: Use wss for a secure connection.

With the camera information above filled in:
ws://localhost:8083/stream/18f9dd80-ee96-4d48-b4f4-c154852ba071/channel/0/mse?uuid=18f9dd80-ee96-4d48-b4f4-c154852ba071&channel=0
WebRTC

```
/stream/{STREAM_ID}/channel/{CHANNEL_ID}/webrtc
http://127.0.0.1:8083/stream/{STREAM_ID}/channel/{CHANNEL_ID}/webrtc
```

With the camera information above filled in:
http://localhost:8083/stream/18f9dd80-ee96-4d48-b4f4-c154852ba071/channel/0/webrtc
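The four patterns above differ only in their suffix, so the front end can derive all of them from one stream/channel pair. A helper sketch (the host and IDs are just examples):

```javascript
// Compose the four playback URLs RTSPtoWeb exposes for one channel.
function buildStreamUrls(base, streamId, channelId) {
  const prefix = `/stream/${streamId}/channel/${channelId}`;
  return {
    hls:    `http://${base}${prefix}/hls/live/index.m3u8`,
    hlsll:  `http://${base}${prefix}/hlsll/live/index.m3u8`,
    mse:    `ws://${base}${prefix}/mse?uuid=${streamId}&channel=${channelId}`,
    webrtc: `http://${base}${prefix}/webrtc`,
  };
}
```

Behind TLS, swap `http`/`ws` for `https`/`wss`.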
html

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>RTSPtoWeb HLS example</title>
  </head>
  <body>
    <h1>RTSPtoWeb HLS example</h1>
    <input type="hidden" name="hls-url" id="hls-url"
        value="http://localhost:8083/stream/demo/channel/0/hls/live/index.m3u8">
    <video id="hls-video" autoplay muted playsinline controls
        style="max-width: 100%; max-height: 100%;"></video>
    <script>
      document.addEventListener('DOMContentLoaded', function () {
        const videoEl = document.querySelector('#hls-video')
        const hlsUrl = document.querySelector('#hls-url').value
        if (Hls.isSupported()) {
          const hls = new Hls()
          hls.loadSource(hlsUrl)
          hls.attachMedia(videoEl)
        } else if (videoEl.canPlayType('application/vnd.apple.mpegurl')) {
          // native HLS fallback (Safari)
          videoEl.src = hlsUrl
        }
      })
    </script>
    <script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
  </body>
</html>
```
html

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>RTSPtoWeb HLS-LL example</title>
  </head>
  <body>
    <h1>RTSPtoWeb HLS-LL example</h1>
    <input type="hidden" name="hlsll-url" id="hlsll-url"
        value="http://localhost:8083/stream/demo/channel/0/hlsll/live/index.m3u8">
    <video id="hlsll-video" autoplay muted playsinline controls
        style="max-width: 100%; max-height: 100%;"></video>
    <script>
      document.addEventListener('DOMContentLoaded', function () {
        const videoEl = document.querySelector('#hlsll-video')
        const hlsllUrl = document.querySelector('#hlsll-url').value
        if (Hls.isSupported()) {
          const hls = new Hls()
          hls.loadSource(hlsllUrl)
          hls.attachMedia(videoEl)
        } else if (videoEl.canPlayType('application/vnd.apple.mpegurl')) {
          // native HLS fallback (Safari)
          videoEl.src = hlsllUrl
        }
      })
    </script>
    <script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
  </body>
</html>
```
html

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>RTSPtoWeb MSE example</title>
  </head>
  <body>
    <h1>RTSPtoWeb MSE example</h1>
    <input type="hidden" name="mse-url" id="mse-url"
        value="ws://localhost:8083/stream/demo/channel/0/mse?uuid=demo&channel=0">
    <video id="mse-video" autoplay muted playsinline controls
        style="max-width: 100%; max-height: 100%;"></video>
    <script src="main.js"></script>
  </body>
</html>
```
main.js
```js
document.addEventListener('DOMContentLoaded', function () {
  const mseQueue = []
  let mseSourceBuffer
  let mseStreamingStarted = false
  function startPlay (videoEl, url) {
    const mse = new MediaSource()
    videoEl.src = window.URL.createObjectURL(mse)
    mse.addEventListener('sourceopen', function () {
      const ws = new WebSocket(url)
      ws.binaryType = 'arraybuffer'
      ws.onopen = function (event) {
        console.log('Connect to ws')
      }
      ws.onmessage = function (event) {
        const data = new Uint8Array(event.data)
        if (data[0] === 9) {
          let mimeCodec
          const decodedArr = data.slice(1)
          if (window.TextDecoder) {
            mimeCodec = new TextDecoder('utf-8').decode(decodedArr)
          } else {
            mimeCodec = Utf8ArrayToStr(decodedArr)
          }
          mseSourceBuffer = mse.addSourceBuffer('video/mp4; codecs="' + mimeCodec + '"')
          mseSourceBuffer.mode = 'segments'
          mseSourceBuffer.addEventListener('updateend', pushPacket)
        } else {
          readPacket(event.data)
        }
      }
    }, false)
  }
  function pushPacket () {
    const videoEl = document.querySelector('#mse-video')
    let packet
    if (!mseSourceBuffer.updating) {
      if (mseQueue.length > 0) {
        packet = mseQueue.shift()
        mseSourceBuffer.appendBuffer(packet)
      } else {
        mseStreamingStarted = false
      }
    }
    if (videoEl.buffered.length > 0) {
      if (typeof document.hidden !== 'undefined' && document.hidden) {
        // no sound, browser paused video without sound in background
        videoEl.currentTime = videoEl.buffered.end((videoEl.buffered.length - 1)) - 0.5
      }
    }
  }
  function readPacket (packet) {
    if (!mseStreamingStarted) {
      mseSourceBuffer.appendBuffer(packet)
      mseStreamingStarted = true
      return
    }
    mseQueue.push(packet)
    if (!mseSourceBuffer.updating) {
      pushPacket()
    }
  }
  const videoEl = document.querySelector('#mse-video')
  const mseUrl = document.querySelector('#mse-url').value
  // fix stalled video in safari
  videoEl.addEventListener('pause', () => {
    if (videoEl.currentTime > videoEl.buffered.end(videoEl.buffered.length - 1)) {
      videoEl.currentTime = videoEl.buffered.end(videoEl.buffered.length - 1) - 0.1
      videoEl.play()
    }
  })
  startPlay(videoEl, mseUrl)
})
```
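The `data[0] === 9` branch above is RTSPtoWeb's MSE framing: a WebSocket message whose first byte is 9 carries the codec string (UTF-8 after the marker byte); anything else is media data. That dispatch, factored out for illustration:

```javascript
// Classify one MSE websocket message from RTSPtoWeb:
// first byte 9 => codec-string packet, otherwise media data.
function classifyPacket(buf) {
  const data = new Uint8Array(buf);
  if (data[0] === 9) {
    // codec init packet, e.g. "avc1.640029,mp4a.40.2"
    return { type: 'codec', codec: new TextDecoder('utf-8').decode(data.slice(1)) };
  }
  return { type: 'media', bytes: data.byteLength };
}
```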
Using this main.js directly in a Vue app can fail to open the WebSocket, leaving the video blank. Here is the adjusted version:
```ts
export function createMediaSource(videoElement: HTMLVideoElement, url: string, mseQueue: ArrayBuffer[], onUpdate: () => void) {
  const mse = new MediaSource();
  videoElement.src = URL.createObjectURL(mse);
  let mseStreamingStarted = false;
  let sourceBuffer: SourceBuffer | null = null;
  mse.addEventListener("sourceopen", function () {
    const ws = new WebSocket(url);
    ws.binaryType = "arraybuffer";
    ws.onopen = () => {
      console.log("Connected to ws");
    };
    ws.onmessage = (event) => {
      const data = new Uint8Array(event.data);
      if (data[0] === 9) {
        const decodedArr = data.slice(1);
        const mimeCodec = new TextDecoder("utf-8").decode(decodedArr);
        sourceBuffer = mse.addSourceBuffer(`video/mp4; codecs="${mimeCodec}"`);
        sourceBuffer.mode = "segments";
        sourceBuffer.addEventListener("updateend", () => pushPacket());
      } else {
        readPacket(event.data);
      }
    };
  });
  function pushPacket() {
    if (!sourceBuffer || sourceBuffer.updating) return;
    if (mseQueue.length > 0) {
      const packet = mseQueue.shift();
      if (packet) {
        sourceBuffer.appendBuffer(packet);
      }
    } else {
      mseStreamingStarted = false;
    }
    if (videoElement.buffered.length > 0) {
      if (typeof document.hidden !== "undefined" && document.hidden) {
        videoElement.currentTime = videoElement.buffered.end(videoElement.buffered.length - 1) - 0.5;
      }
    }
    onUpdate();
  }
  function readPacket(packet: ArrayBuffer) {
    if (!mseStreamingStarted) {
      if (sourceBuffer) {
        sourceBuffer.appendBuffer(packet);
        mseStreamingStarted = true;
        return;
      }
    }
    mseQueue.push(packet);
    if (sourceBuffer && !sourceBuffer.updating) {
      pushPacket();
    }
  }
}
```
Component template change

```html
<video ref="afterVideo" autoplay muted playsinline controls style="width: 100%; height: 100%"></video>
```
html

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>RTSPtoWeb WebRTC example</title>
  </head>
  <body>
    <h1>RTSPtoWeb WebRTC example</h1>
    <input type="hidden" name="webrtc-url" id="webrtc-url"
        value="http://localhost:8083/stream/demo/channel/0/webrtc">
    <video id="webrtc-video" autoplay muted playsinline controls
        style="max-width: 100%; max-height: 100%;"></video>
    <script src="main.js"></script>
  </body>
</html>
```
main.js
```js
document.addEventListener('DOMContentLoaded', function () {
  function startPlay (videoEl, url) {
    const webrtc = new RTCPeerConnection({
      iceServers: [{
        urls: ['stun:stun.l.google.com:19302']
      }],
      sdpSemantics: 'unified-plan'
    })
    webrtc.ontrack = function (event) {
      console.log(event.streams.length + ' track is delivered')
      videoEl.srcObject = event.streams[0]
      videoEl.play()
    }
    webrtc.addTransceiver('video', { direction: 'sendrecv' })
    webrtc.onnegotiationneeded = async function handleNegotiationNeeded () {
      const offer = await webrtc.createOffer()
      await webrtc.setLocalDescription(offer)
      fetch(url, {
        method: 'POST',
        body: new URLSearchParams({ data: btoa(webrtc.localDescription.sdp) })
      })
        .then(response => response.text())
        .then(data => {
          try {
            webrtc.setRemoteDescription(
              new RTCSessionDescription({ type: 'answer', sdp: atob(data) })
            )
          } catch (e) {
            console.warn(e)
          }
        })
    }
    const webrtcSendChannel = webrtc.createDataChannel('rtsptowebSendChannel')
    webrtcSendChannel.onopen = (event) => {
      console.log(`${webrtcSendChannel.label} has opened`)
      webrtcSendChannel.send('ping')
    }
    webrtcSendChannel.onclose = (_event) => {
      console.log(`${webrtcSendChannel.label} has closed`)
      startPlay(videoEl, url)
    }
    webrtcSendChannel.onmessage = event => console.log(event.data)
  }
  const videoEl = document.querySelector('#webrtc-video')
  const webrtcUrl = document.querySelector('#webrtc-url').value
  startPlay(videoEl, webrtcUrl)
})
```
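The `fetch` above base64-encodes the local offer SDP into a form field named `data`, and decodes the base64 answer SDP that comes back. That round trip in isolation (the function names are mine, for illustration):

```javascript
// Encode an SDP offer the way the POST body above does.
function encodeOffer(sdp) {
  return new URLSearchParams({ data: btoa(sdp) }).toString();
}

// Decode the server's base64 answer into an RTCSessionDescription init.
function decodeAnswer(body) {
  return { type: 'answer', sdp: atob(body) };
}
```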
Author: Weee
Link:
Copyright: unless otherwise stated, all posts on this blog are licensed under BY-NC-SA. Please credit the source when reposting!