Monday, 29 September 2014


This post will help you set up video chat in your existing .NET application.

You can download the complete chat solution from here:

eStreamChat Demo


Red5 Media Server delivers a powerful video streaming and multi-user solution to the Adobe Flash Player and other client technologies. Based on Java and some of the most powerful open source frameworks, Red5 stands as a solid solution for businesses of all sizes, including the enterprise. Red5 includes support for the latest multi-user APIs, including NetConnection, NetStream and SharedObject, while providing a powerful RTMP / servlet implementation. In addition to support for the RTMP protocol, the application server has an embedded Tomcat servlet container for JEE web applications. Application development draws additional benefits from the Spring Framework and scope-based, event-driven services. By using the open source Red5 Media Server, you are developing with a truly open and extensible platform that can be used in video conferencing, multi-user gaming and enterprise application software. Red5 is an open source Flash server written in Java that supports:

  • Streaming Video (FLV, F4V, MP4, 3GP)
  • Streaming Audio (MP3, F4A, M4A, AAC) 
  • Recording Client Streams (FLV and AVC+AAC in FLV container)
  • Shared Objects
  • Live Stream Publishing
  • Remoting Protocols: RTMP, RTMPT, RTMPS, and RTMPE
  • WebSocket (ws and wss)
  • HLS
  • RTSP (From Axis-type cameras)


First of all, get the software listed below:
Java SE Development Kit 8 - Downloads - Oracle

Red5 Media Server
Installation Steps
1) Install JDK
If you have already installed the JDK, please ignore this step and jump straight to the next one.
Double-click the installation file jdk-8u20-windows-i586.exe to start the installation. It will take a few minutes. The only thing you need to pay attention to is the installation path, which by default is
C:\Program Files (x86)\Java

2) System Environment Configuration:
Right click "My Computer", go to "Properties", "Advanced", and select "Environment Variables".

Add following variables: (Suppose the installation path of Java JDK is D:\jdk.)
Add variable PATH and set its value as D:\jdk\bin
Add variable CLASSPATH, value .;D:\jdk\lib;D:\jdk\lib\tools.jar;D:\jdk\lib\dt.jar
Add variable JAVA_HOME, value D:\jdk
Note: the leading "." in CLASSPATH cannot be omitted.

After finishing the configuration, we need to test whether the Java development environment has been installed successfully.

How to Test?

Open a command-line window, type the command "java" and execute it. If there is no error message, the installation was successful.
You can also go to the Control Panel and check, as shown below.

So, we are done with the JDK installation, which is a prerequisite for the Red5 server.

3) Now it is time to get the Red5 server:
Click and download the latest 1.0 release.
Once downloaded, run the Red5 setup by double-clicking the file below.

Start the installation of Red5.

Double-click the downloaded file setup-red5-0.6.2.exe to open the installation wizard.

Following the wizard, you are asked to select the path of the Java Runtime Environment (JRE). The installer searches for the path automatically; if it fails, please define one manually.

Then select the installation directory for Red5; a non-system disk is recommended, for example C:\red5.

Then select the components to install. By default all items are selected, and we do not suggest changing this.

There are installation options in "Select Additional Tasks".

If you want to set it as system service, pick "Register as service". (Recommended)
If you want to "Create a desktop icon", tick the left checkbox.
If you want to "Download sample streams", tick the left checkbox.

4) Red5 Configuration

Last, pay attention to the system configuration of Red5.

The RTMP port is the serving port of Red5; it is the communication port between server and client.
The HTTP servlet engine port is the HTTP communication port of Red5, mostly used by the administrator.
Make sure the Red5 service is running, as shown below.

Open your browser and go to http://localhost:5080/. If you see the screen below, your server is working perfectly.

1) Go to
2) Download the eStreamChat .NET solution.

Why eStreamChat?

  • Faster and better chat! The software uses HTML5 and jQuery for most of the functionality, and no plugins (Java, Flash, etc.) are used unless necessary (e.g. for webcam broadcast).
  • Fully Unicode! Forget about encoding problems. The software is UTF-8 based and supports any language, including right-to-left ones.
  • Open source! Unlike closed proprietary systems, eStreamChat gives you the ability to see what happens under the hood, customize everything and integrate with any site.
  • Scalable! The chat backend is written in C# and can easily be scaled to multiple servers to support a huge number of simultaneous users.

Open the solution and change the config file settings as explained below.

3) Change the configuration section

In the solution's settings file, the FlashMediaServer setting must point to your Red5 server. The element looks like this (the value shown is only a placeholder; substitute your own server's RTMP URL):

<setting name="FlashMediaServer" serializeAs="String">
    <value><!-- your Red5 RTMP URL, e.g. rtmp://your-red5-host/appname --></value>
</setting>

4) Run the solution in Visual Studio or deploy it to the web root. You will see the screen below.

We are done. Hope this helps!

Tuesday, 2 September 2014



WebRTC will have an impact on the future of Unified Communications. WebRTC was started by Google with the goal of building a standards-based real-time media engine implemented in all of the available browsers. With WebRTC in the browser, a web services application can direct the browser to establish a real-time voice or video RTP connection to another WebRTC device or to a WebRTC media server. The WebRTC APIs and the media engine define the communications path.


I have never seen any proper or complete solution for video streaming in a web application. Well, I do understand that people have answers like "Yes, we use Flash or Silverlight along with web technologies and Red5 or Adobe media servers", but I am talking about a purely web-based, peer-to-peer (P2P) solution like the one shown below:
You see a signaling server, but the fact is that it is used just for handshaking purposes, or you could say just to initiate the connection between peers. Whatever you choose, you'll need an intermediary server to exchange signaling messages and application data between clients. Unfortunately, a web app cannot simply shout into the internet, 'Connect me to shabir!'. Nothing to worry about, because signaling messages are small and mostly exchanged at the beginning of a call.

Integrating RTC technology with existing content and services has been difficult and time consuming, particularly on the web; and when it comes to cost, it has required expensive audio and video technologies to be licensed or developed in house.
The latest technology, known as WebRTC, is the answer to the above problem. The very simple explanation is that WebRTC enables browser-to-browser audio and video conferencing. The user can initiate a call by clicking on an icon representing the other endpoint. What is significant and great is that a separate conferencing client isn't needed, and the only technology needed by the partner is a standard, up-to-date browser.
What technologies are working under the hood behind all this? The WebRTC engine within the browser uses HTML5 and JavaScript to build fairly simple routines that capture, control, and send audio and video between two browsers.
How is WebRTC going to help us exchange real-time media between two browsers? The workflow for this type of communication looks like this:
  1. At the media source, input devices are opened for capture (getUserMedia, the HTML5 API used to access your media devices).
  2. Media from the input devices is encoded  and transmitted across the network.
  3. At the media destination, the packets are decoded and formed into a media stream.
  4. The media stream is sent to output devices. ( onaddstream ) 
Luckily , the browser hides most of this complexity behind three primary APIs:
  • MediaStream: acquisition of audio and video streams
  • RTCPeerConnection: communication of audio and video data
  • RTCDataChannel: communication of arbitrary application data
All it takes is a few lines of JavaScript code, and any web application can enable a rich teleconferencing experience with peer-to-peer data transfers. That's the promise and the power of WebRTC!
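To make the steps and APIs above concrete, here is a minimal sketch of the offer/answer call setup, simulated with plain JavaScript objects rather than the real browser APIs (the function names and fields below are illustrative assumptions, not the actual RTCPeerConnection implementation):

```javascript
// Simulated offer/answer call setup, mirroring the RTCPeerConnection flow.
// These are plain-JS stand-ins for illustration, not the real browser APIs.

function createOffer(caller) {
  // In the browser, pc.createOffer() would produce a real SDP description.
  return { type: 'offer', sdp: 'v=0 ...', from: caller };
}

function createAnswer(callee, offer) {
  // In the browser: pc.setRemoteDescription(offer), then pc.createAnswer().
  return { type: 'answer', sdp: 'v=0 ...', from: callee, inReplyTo: offer.type };
}

// The signaling server's only job is to relay these descriptions between peers.
const offer = createOffer('alice');
const answer = createAnswer('bob', offer);
console.log(offer.type, answer.type); // offer answer
```

In a real application each description would be a full SDP blob generated by RTCPeerConnection; the point of the sketch is only the relay role the signaling channel plays.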

Assume that you are required to set up your own WebRTC-based video/audio/chat conferencing within your existing web application. What do you need?


Basic knowledge:
  1. HTML, CSS and JavaScript
  2. git
  3. Chrome DevTools

Experience with Node.js and socket.io would also be useful. Installed on your development machine:
  1. Google Chrome or Firefox.
  2. Code editor.
  3. Web cam.
  4. git, in order to get the source code.
  5. The source code.
  6. Node.js and node-static. (Node.js hosting would also be an advantage.)
A fully functional WebRTC application needs two to three servers to get the complete system up. Don't be afraid: these are just servers which are easy to set up, and they give you complete control over the participants' video/audio chat.
  1. SIGNALLING SERVER - The signaling server is your own implementation for managing and communicating between users. It also helps exchange the information needed to get the live video feed started. There are several methods for setting up the back-end. I personally prefer to use a Node.js server with
  2. STUN SERVER - The second and third servers are the STUN and TURN servers. These servers help users connect to each other and handle the actual live video and data channel messages. The difference is that STUN helps users connect directly to each other so they can communicate.
  3. TURN SERVER - When the STUN server can't make the connection due to firewalls or other network issues, the TURN server is used. TURN acts as a middleman to connect the users. Some TURN servers can also act as STUN servers; in that case a separate STUN server is not required.
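On the client side, the three-server setup above collapses into a single configuration object handed to RTCPeerConnection. A minimal sketch follows; the server URLs and credentials are hypothetical placeholders, not real deployments:

```javascript
// Hypothetical server addresses -- substitute your own deployments.
const STUN = { urls: 'stun:stun.example.org:3478' };
const TURN = {
  urls: 'turn:turn.example.org:3478',
  username: 'demoUser',      // TURN requires credentials; these are placeholders
  credential: 'demoSecret'
};

// STUN is listed first and always attempted; TURN is the relay fall-back.
const rtcConfig = { iceServers: [STUN, TURN] };

// In the browser you would pass this to the peer connection:
//   const pc = new RTCPeerConnection(rtcConfig);
console.log(rtcConfig.iceServers.length); // 2
```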
Alternatively, there are signaling servers you can access for free; I found one, listed below:

// using web-sockets for signaling!
var SIGNALING_SERVER = 'wss://';

For the STUN and TURN servers, there are some places on the web that offer an open connection for free, but you will have to search for them. Our motto here, though, is to set up our own servers and have control over them. I am sure this is a wiser decision than relying on others.
I set up some video and text chat samples on my website.


  1. The key difference between these two types of solutions is that media travels directly between both endpoints if STUN (Simple Traversal of UDP through NAT) is used, whereas media is proxied through the server if TURN is utilized.
  2. TURN is preferred because it is capable of traversing symmetric NATs too. However, STUN is useful for speeding up the connection by getting immediate candidates when users sit behind the same NAT, e.g. a LAN.
  3. A media relay server or ICE server is used to set up the media session and provide the list of potential candidates to both parties in a call, regardless of which media delivery option is selected for each end of the call.
  4. Also understand that the media stream may not always use the same solution on both ends, as STUN may be possible for one endpoint but not for the other.
  5. When we use both STUN and TURN servers, STUN is always attempted; TURN is used as a fall-back option depending on client locations and network topologies.
     var iceServers = {
       iceServers: [STUN, TURN]
     };
  1. The TURN protocol runs on top of STUN to set up a relay service. A well-written TURN server will also function as a STUN server, so you can skip a separate STUN server in that case.
  2. TURN was developed to cover the holes that STUN hasn't (or may not have) punched, e.g. SNATs, i.e. symmetric NATs.
  3. A critical disadvantage of a TURN server is its cost, and its huge bandwidth usage when an HD video stream is delivered.
  4. When the protocol was updated to include support for TCP, the name was changed to Session Traversal Utilities for NAT to reflect that it was no longer limited to UDP traffic.
  5. Although media leveraging STUN is not a direct host-to-host session, it is the next best option, as the media path is still sent directly between the two clients' own firewalls, over the Internet.

What is signaling?

Signaling is the process of coordinating communication. In order for a WebRTC application to set up a 'call', its clients need to exchange information:
  • Session control messages used to open or close communication.
  • Error messages.
  • Media metadata such as codecs and codec settings, bandwidth and media types.
  • Key data, used to establish secure connections.
  • Network data, such as a host's IP address and port as seen by the outside world.
This signaling process needs a way for clients to pass messages back and forth. That mechanism is not implemented by the WebRTC APIs: you need to build it yourself. We describe below some ways to build a signaling service. First, however, a little context...
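Because the signaling mechanism is yours to build, the messages can be as simple as JSON envelopes relayed over a WebSocket. One possible shape (the envelope fields below are an assumption of this post, not any standard):

```javascript
// A minimal JSON envelope for signaling messages. The 'kind' field covers
// the categories above: session control, errors, media metadata, key data
// and network data.
function makeSignal(kind, from, to, payload) {
  return JSON.stringify({ kind, from, to, payload });
}

// The receiving client simply parses the envelope back.
const wire = makeSignal('media-metadata', 'alice', 'bob', { codec: 'VP8' });
const msg = JSON.parse(wire);
console.log(msg.kind, msg.payload.codec); // media-metadata VP8
```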

How Communication is achieved:

What is STUN?

Session Traversal Utilities for NAT (STUN) (acronym within an acronym) is a protocol to discover your public address and determine any restrictions in your router that would prevent a direct connection with a peer.
The client sends a request to a STUN server on the internet, which replies with the client's public address and whether or not the client is accessible behind the router's NAT.

What is NAT?

Network Address Translation (NAT) is used to give your device a public IP address. A router will have a public IP address and every device connected to the router will have a private IP address. Requests will be translated from the device’s private IP to the router’s public IP with a unique port. That way you don’t need a unique public IP for each device but can still be discovered on the internet.
Some routers have restrictions on who can connect to devices on the network. This can mean that even though we have the public IP address found by the STUN server, not just anyone can create a connection. In this situation we need to turn to TURN.
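A toy model of the translation table such a router keeps may make this clearer (deliberately simplified; real NAT behaviour varies between routers):

```javascript
// Simplified NAT: maps each device's private ip:port to a unique public port
// on the router's single public IP address.
const PUBLIC_IP = '203.0.113.7';   // example address from the RFC 5737 range
let nextPort = 50000;
const mappings = new Map();

function translate(privateIp, privatePort) {
  const key = `${privateIp}:${privatePort}`;
  if (!mappings.has(key)) {
    mappings.set(key, nextPort++);  // allocate a fresh public port once per device
  }
  return { ip: PUBLIC_IP, port: mappings.get(key) };
}

// Two devices behind the same router share the public IP but get distinct ports.
const a = translate('192.168.1.10', 4444);
const b = translate('192.168.1.11', 4444);
console.log(a.port, b.port); // 50000 50001
```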

What is TURN?

Some routers using NAT employ a restriction called ‘Symmetric NAT’. This means the router will only accept connections from peers you’ve previously connected to.
Traversal Using Relays around NAT (TURN) is meant to bypass the Symmetric NAT restriction by opening a connection with a TURN server and relaying all information through that server. You would create a connection with a TURN server and tell all peers to send packets to the server which will then be forwarded to you. This obviously comes with some overhead so is only used if there are no other alternatives.

What is SDP?

Session Description Protocol (SDP) is a standard for describing the multimedia content of the connection, such as resolution, formats, codecs and encryption, so that both peers can understand each other once the data is transferring. This is not the media itself but rather the metadata.
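SDP is plain text, one type=value line at a time. As a small illustration, here is a parser for the a=rtpmap attribute, where codecs such as Opus and VP8 are declared (the sample SDP lines are made up for the example):

```javascript
// Extract the declared codecs from SDP 'a=rtpmap' lines, e.g.
//   a=rtpmap:111 opus/48000/2
function codecsFromSdp(sdp) {
  return sdp
    .split('\n')                       // real SDP uses \r\n; trim handles both
    .map(line => line.trim())
    .filter(line => line.startsWith('a=rtpmap:'))
    .map(line => line.split(' ')[1].split('/')[0]);
}

const sampleSdp = 'v=0\r\na=rtpmap:111 opus/48000/2\r\na=rtpmap:100 VP8/90000\r\n';
console.log(codecsFromSdp(sampleSdp)); // [ 'opus', 'VP8' ]
```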

What is an ICE candidate?

As well as exchanging information about the media (discussed above under SDP), peers must exchange information about the network connection. This is known as an ICE candidate, and it details the methods by which the peer is able to communicate (directly or through a TURN server).
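ICE candidates are exchanged as text too, and the typ field tells you whether the route is direct (host), STUN-discovered (srflx) or TURN-relayed (relay). A small sketch (the candidate string is an illustrative example in the standard format):

```javascript
// Pull the candidate type ('host', 'srflx' via STUN, or 'relay' via TURN)
// out of a candidate string.
function candidateType(candidate) {
  const fields = candidate.split(' ');
  const typIndex = fields.indexOf('typ');
  return typIndex >= 0 ? fields[typIndex + 1] : null;
}

const sample =
  'candidate:842163049 1 udp 1677729535 203.0.113.7 50000 typ srflx raddr 192.168.1.10 rport 4444';
console.log(candidateType(sample)); // srflx (a STUN-discovered, server-reflexive address)
```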

Architectural and implementation Overview of WebRTC?

To avoid redundancy and to maximize compatibility with established technologies, signaling methods and protocols are not specified by the WebRTC standards. This approach is outlined by JSEP, the JavaScript Session Establishment Protocol.
JSEP's architecture also avoids the browser having to save state: that is, to function as a signaling state machine. This would be problematic if, for example, signaling data were lost each time a page was reloaded. Instead, signaling state can be saved on a server.


What is the goal of WebRTC?
WebRTC aims to give the development community access to open, high-quality, real-time communications technology. Before WebRTC, this type of RTC technology has only been available to large corporations who can afford the expensive licensing fees or through proprietary plugins like Adobe Flash. WebRTC will open the door for a new wave of video, voice, and data web applications.

What is in the future?
By now almost all vendors have joined hands and provided full-fledged HTML5 API support in their respective browsers to integrate WebRTC; Microsoft has not yet. Once this matures, WebRTC is going to make your online customers' video conferencing fun. There are ongoing discussions as to which video codec(s) will be implemented.
The browser alone will start a video chat with the other end. WebRTC may be the next wave in real-time communications over IP networks. If you have not heard of WebRTC, now is the right time to get into it.
Much of the promise of WebRTC is in the fact that it provides APIs from the browser to the underlying hardware. An HTML5 API, getUserMedia, is a key feature that can capture a codec's output.
It's currently built into Chrome 21, Opera 12, Firefox 17, and Internet Explorer (via Chrome Frame). TenHands, a startup video conferencing company, has embedded WebRTC capability into FaceTime.
Thanks to Muaz, who has done excellent and mind-blowing work for WebRTC. You can see live P2P samples which he has built purely using JavaScript.


Apple is silent, so Safari doesn't support WebRTC.


Microsoft submitted an alternative proposal to the W3C WebRTC 1.0 Working Draft, dubbed CU-RTC-Web (Customizable, Ubiquitous Real-Time Communication).
How is the CU-RTC Web standard different than WebRTC?
The Microsoft draft outlines a low-level API that allows developers more direct access to the underlying network and media-delivery components. It exposes objects representing network sockets and gives explicit application control over the media transport.

In contrast, the WebRTC API abstracts these details with a text-based interface that passes encoded strings between the two participants in the call. With the WebRTC draft, developers are responsible for passing the strings between communicating browsers, but not explicitly configuring media transport for a video chat. 


What is WebRTC?

WebRTC is an open framework for the web that enables Real Time Communications in the browser. It includes the fundamental building blocks for high quality communications on the web such as network, audio and video components used in voice and video chat applications.

These components, when implemented in a browser, can be accessed through a JavaScript API, enabling developers to easily implement their own RTC web app.

The WebRTC effort is being standardized at the API level at the W3C and at the protocol level at the IETF.

Why should I use WebRTC?

We think you'll want to build your next video chat style application using WebRTC. Here's why:
  • A key factor in the success of the Internet is that its core technologies such as HTML, HTTP, and TCP/IP are open and freely implementable. Currently, there is no free, high quality, complete solution available that enables communication in the browser.  WebRTC is a package that enables this.

  • Already integrated with best-of-breed voice and video engines that have been deployed on millions of end points over the last 8+ years. Google is not charging royalties for this technology.

  • Includes and abstracts key NAT and firewall traversal technology using STUN, ICE, TURN, RTP-over-TCP and support for proxies.

  • Builds on the strength of the web browser: WebRTC abstracts signaling by offering a signaling state machine that maps directly to PeerConnection. Web developers can therefore choose the protocol of choice for their usage scenario (for example, but not limited to: SIP, XMPP/Jingle, etc...).

What is the Opus audio codec?

Opus is a royalty-free codec defined by IETF RFC 6716. It supports constant and variable bitrate encoding from 6 kbit/s to 510 kbit/s, frame sizes from 2.5 ms to 60 ms, and various sampling rates from 8 kHz (with 4 kHz bandwidth) to 48 kHz (with 20 kHz bandwidth, where the entire hearing range of the human auditory system can be reproduced).

What is the iSAC audio codec?

iSAC is a robust, bandwidth adaptive, wideband and super-wideband voice codec developed by Global IP Solutions used in many Voice over IP (VoIP) and streaming audio applications. iSAC is used by industry leaders in hundreds of millions of VoIP endpoints. This codec is included as part of the WebRTC project.

What is the iLBC audio codec?

iLBC is a free narrowband voice codec that was developed by Global IP Solutions used in many Voice over IP (VoIP) and streaming audio applications. In 2004, the final IETF RFC versions of the iLBC codec spec and the iLBC RTP Profile draft became available. This codec is included as part of the WebRTC project.

What is the VP8 video codec?

VP8 is a highly efficient video compression technology that was developed by On2 Technologies. Google acquired On2 in February 2010 and made it available as part of the WebM Project. It is the video codec included in the WebRTC project.

What other components are included in the WebRTC package?


The WebRTC project offers a complete stack for voice communications. It includes not only the necessary codecs, but other components crucial for a great experience. This includes software based acoustic echo cancellation (AEC), automatic gain control (AGC), noise reduction, noise suppression and hardware access and control across multiple platforms.


The WebRTC project builds on the VP8 codec, introduced in 2010 as part of the WebM Project. It includes components to conceal packet loss, clean up noisy images as well as capture and playback capabilities across multiple platforms.


Dynamic jitter buffers and error concealment techniques are included for audio and video; they help mitigate the effects of packet loss and unreliable networks. Also included are components for establishing a peer-to-peer connection using ICE / STUN / TURN / RTP-over-TCP and support for proxies. This technology comes in part from the libjingle project.

How do I access the WebRTC code?

Go to

How can I test the quality of WebRTC components?

We have put an early preview sample application here.

Are WebRTC components subject to change?

WebRTC is based on an API that is still under development through efforts at the WHATWG, W3C and IETF. We hope to get to a stable API once a few browser vendors have implementations ready for testing. Once the API is stable, our goal will be to offer backwards compatibility and interoperability. The WebRTC API layer will be our main focus for stability and interoperability. The components under it may be modified to improve quality, performance and feature set.

How can I implement my own renderer or add my own hooks in the WebRTC Platform?

To do this, please take a look at the external renderer API.

WebRTC components are open-source. How do I get the source and contribute code?

Please see Getting started and Contributing bug fixes for more information.

To be a Contributor, do I need to sign any agreements?

Yes, each Contributor must sign and return the Contributor License Agreement.

Do I have to be a programmer to use WebRTC?

Yes: to build WebRTC support into a software application or contribute improvements, programming skills are required. However, using the JavaScript APIs that call WebRTC in the browser will only require typical web development skills.

Is the WebRTC project owned by Google or is it independent?

WebRTC is an open-source project supported by Google, Mozilla and Opera. The API and underlying protocols are being developed jointly at the W3C and IETF.

Are the WebRTC components from Google’s acquisition of Global IP Solutions?

Yes; some components, such as VoiceEngine, VideoEngine, NetEQ and AEC, stem from the GIPS acquisition.

What codecs are supported in WebRTC?

The currently supported voice codecs are G.711, G.722, iLBC, and iSAC, and VP8 is the supported video codec. The list of supported codecs may change in the future.

Please explain how WebRTC is free of charge?

Some software frameworks, voice and video codecs require end-users, distributors and manufacturers to pay patent royalties to use the intellectual property within the software technology and/or codec. Google is not charging royalties for WebRTC and its components including the codecs it supports (VP8 for video and iSAC and iLBC for audio).  For more information, see the License page

What does this license let me do?

Like most BSD licenses, this license allows you to use the WebRTC code with a minimum of restrictions on your use. You can use the code in proprietary software as well as open source software.

Do I need to release the source if I make changes?

No, the license does not require you to release source if you make changes. However, we would love to see any changes you make and possibly incorporate them, so if you want to participate please visit the code page and submit some patches.

Why is there a separate patent grant?

In order to decouple patents from copyright, thus preserving the pure BSD nature of the copyright license, the license and the patent grant are separate. This means we are using a standard (BSD) open source copyright license, and the patent grant can exist on its own. This makes WebRTC compatible with all major license scenarios.

What if someone gets the code from Google and gives it to me without changes. Do I have a patent grant from Google?

Yes, you still have the right to redistribute and you still have a patent license for Google's patents that cover the code that Google released.

What if someone makes a change to the code and gives it to me. Do I have a patent license from Google for that change?

You still have the right to redistribute but no patent license for the changes (if there are any patents covering it). We can't give patent licenses for changes people make after we distribute the code, as we have no way to predict what those changes will be. Other common licenses take the same approach, including the Apache license.

What if Google receives or buys a patent that covers the code I receive sometime after I receive the code. Do I have a patent grant for that patent?

Yes, you still have the right to redistribute and you still have a patent license for Google's patents that cover the code that Google released.

What if my competitor uses the code and brings patent litigation against me for something unrelated to the code. Does he or she still have a patent license?

Yes, he/she still has the right to redistribute and he/she still has a patent license for Google's patents that cover the code that Google released.

Muaz Khan / WebRTC Developer!


Single Page Demos

  1. Simplest Example!
  2. Simple Demo using
  3. Simple Demo using WebSocket
  4. A few other single-page demos can be found here

Real Demos

  1. Simple demo using WebSockets / Source Code
  2. Simple demo using / Source Code



Other Tutorials

  1. Are you interested in a "more" simple full-fledged guide? Read this tutorial.
  2. Are you interested in a "beginners" guide? Read this tutorial.
  3. You can find many tutorials here: