
Pushing HTML5 Video content over ColdFusion WebSockets

I’ve been playing with the WebSocket feature introduced in ColdFusion 10 for some time now. I had already tried pushing images over a ColdFusion WebSocket channel and it worked just fine. This time I wanted to put WebSockets to the test by pushing large data at regular intervals. I thought I could push video data over WebSockets, but it turned out that there is no direct way to stream video to many clients. I then came across the drawImage function, which can be used to draw an Image or a Video frame onto an HTML5 Canvas. Once an image is drawn on the Canvas, its base64-encoded data can be obtained by calling the toDataURL function on the Canvas object. This data can then be transferred over a ColdFusion WebSocket to all subscribers, who can then use it to draw the image (video frame) on a Canvas of their own.
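The core of the approach fits in a few lines. Here’s a minimal sketch of the capture step, assuming the video element, the canvas with its 2d context, and the cfwebsocket object (socket) are already wired up; the full publisher code appears further down:

//draw the current video frame onto a canvas
context.drawImage(videoElement, 0, 0, canvasElement.width, canvasElement.height);
//read the frame back as a base64 data URL ("data:image/png;base64,...")
var frame = canvasElement.toDataURL("image/png");
//push the encoded frame to every subscriber of the channel
socket.publish("myChannel", frame);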

Here’s the demo video:


If you are not able to view the above video, please visit http://screencast.com/t/C8tyv1TejJpp. Please note that I’m not transferring the audio track present in the video, and I’m still trying to figure out how that can be achieved.
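One detail the snippets below depend on: a ColdFusion WebSocket channel has to be declared in Application.cfc before publish and subscribe calls against it will work. Here’s a minimal sketch; the application name is an assumption, and the channel name matches the code below:

//Application.cfc
component
{
    this.name = "videoOverWebSockets"; //assumed application name
    //declare the WebSocket channel used by the publisher and subscriber
    this.wschannels = [{name="myChannel"}];
}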

Here’s the Publisher code:

<!DOCTYPE html>
<html>
<body>
<cfwebsocket name="socket" onmessage="messageHandler">
<video id="videoElement" controls muted>
    <source src="windowsill.webm" type="video/webm">
</video>
<br>
<canvas id="canvasElement" style="border: solid 1px;"></canvas>
<script type="text/javascript">
    var context, canvasElement, videoElement, previous, current;

    //message handler for the CF WebSocket
    messageHandler = function(msg){};

    //grab the elements once the DOM content has been loaded
    document.addEventListener('DOMContentLoaded', function(){
        videoElement = document.getElementById('videoElement');
        canvasElement = document.getElementById('canvasElement');
        context = canvasElement.getContext('2d');
    });

    //once the video's metadata is available
    document.getElementById('videoElement').addEventListener('loadedmetadata', function(){
        //set the canvas width and height to the video's width and height
        canvasElement.width = videoElement.videoWidth;
        canvasElement.height = videoElement.videoHeight;
        //start drawing when the video is played
        videoElement.addEventListener('play', function(){
            draw(this, videoElement.videoWidth, videoElement.videoHeight);
        });
    });

    //draw the video frame on a temporary canvas at 20fps
    function draw(video, width, height){
        //if the video has been paused or ended, stop the loop
        if (video.paused || video.ended) return false;
        //draw the current video frame onto the canvas
        context.drawImage(video, 0, 0, width, height);
        //get the base64-encoded data from the canvas
        current = canvasElement.toDataURL("image/png");
        //publish only if the frame differs from the previous one
        if (previous != current) {
            //transfer the base64-encoded image over the WebSocket
            socket.publish("myChannel", current);
        }
        previous = current;
        //call draw again every 50ms to achieve ~20fps
        setTimeout(draw, 50, video, width, height);
    }
</script>
</body>
</html>

As you can see from the above code, the draw function is called once you start playing the video. It draws the current video frame on a Canvas using the drawImage function and then calls toDataURL to get the base64-encoded data of the image. This data is then published over a ColdFusion WebSocket channel (‘myChannel’). The draw function schedules itself every 50ms, so the current video frame is drawn on the canvas and transferred over the WebSocket at roughly 20fps.
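One thing to keep in mind is payload size: a PNG data URL per frame can get fairly large. A hedged tweak, not used in the demo, is to encode each frame as JPEG at reduced quality; the second argument of toDataURL applies only to image/jpeg (0.7 here is an arbitrary choice):

//inside draw(): trade image quality for a smaller per-frame payload
current = canvasElement.toDataURL("image/jpeg", 0.7);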

The client/subscriber, on receiving the data, draws the image (video frame) on a canvas. Here’s the subscriber code:

<!DOCTYPE HTML>
<html>
<body>
<cfwebsocket name="socket" onmessage="messageHandler" onopen="openHandler">
<canvas id="canvasElement" style="border: solid 1px;" width="426" height="240"></canvas>
<script type="text/javascript">
    var canvas, context, count = 0, flag = false;
    var newImage = new Image();

    document.addEventListener('DOMContentLoaded', function(){
        canvas = document.getElementById('canvasElement');
        context = canvas.getContext('2d');
    });

    function openHandler(){
        //subscribe to the CF WebSocket channel
        socket.subscribe("myChannel", {}, dataHandler);
    }

    function messageHandler(msg){}

    //function that receives the data from the WebSocket channel
    function dataHandler(msg){
        if (msg.type == 'data') {
            //function to call when the image is loaded with base64 data
            newImage.onload = function(){
                //draw the image on the canvas
                context.drawImage(newImage, 0, 0);
                //set the flags when the above function is complete
                flag = true;
                count = 1;
            };
            //if ready to be drawn on the canvas
            if (count == 0 || flag == true) {
                flag = false;
                //assign the base64 data to the source of the image
                newImage.src = msg.data;
            }
        }
    }
</script>
</body>
</html>

On the client side, once the data is received over the WebSocket it is assigned to the source of an Image object. The reason for this is that the drawImage function takes either an Image or a Video as its first argument and doesn’t accept raw base64 data. Once the Image has loaded, it is ready to be drawn on the canvas. This process continues until the video ends or the user pauses it.
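On newer browsers there is an alternative to the Image-object round trip: createImageBitmap can decode the frame directly. This is a hedged sketch, not what the demo uses, and it assumes a browser with fetch and createImageBitmap support:

//alternative dataHandler: decode the data URL without an Image object
function dataHandler(msg){
    if (msg.type == 'data') {
        fetch(msg.data)                                        //fetch accepts data: URLs
            .then(function(res){ return res.blob(); })         //turn the data URL into a Blob
            .then(function(blob){ return createImageBitmap(blob); })
            .then(function(bitmap){
                context.drawImage(bitmap, 0, 0);               //drawImage accepts an ImageBitmap
                bitmap.close();                                //release the bitmap's memory
            });
    }
}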
