OVERVIEW:
WebRTC will have an impact on the future of Unified Communications. WebRTC was started by Google with the goal of building a standards-based real-time media engine implemented in all of the available browsers. With WebRTC in the browser, a web application can direct the browser to establish a real-time voice or video RTP connection to another WebRTC device or to a WebRTC media server. WebRTC APIs and the media engine define the communications path.
Explanation:
I have never seen a proper or complete solution for video streaming in a web application. Yes, I understand that people will answer "we use Flash or Silverlight along with web technologies and Red5 or Adobe media servers", but I am talking about a pure web-based, peer-to-peer (P2P) solution.
A signaling server is part of such an architecture, but it is used just for handshaking, or you could say just to initiate the connection between peers. Whatever you choose, you will need an intermediary server to exchange signaling messages and application data between clients; unfortunately, a web app cannot simply shout around the internet and say 'Connect me to Shabir'. There is nothing to worry about, though, because signaling messages are small and are mostly exchanged at the beginning of a call.
Until now, integrating RTC technology with existing content and services has been difficult and time consuming, particularly on the web. And when it comes to cost, it has required expensive audio and video technologies to be licensed or developed in house.
The latest technology, known as WebRTC, is the answer to the above problem. The very simple explanation is that WebRTC enables browser-to-browser audio and video conferencing: the user can initiate a call by clicking on an icon representing the other endpoint. What is significant is that a separate conferencing client isn't needed, and the only technology needed by the partner is a standard, up-to-date browser.
What technologies are working under the hood behind all this? The WebRTC engine within the browser uses HTML5 and JavaScript, letting you develop fairly simple routines to capture, control, and send audio and video between two browsers.
So how does WebRTC help us exchange real-time media between two browsers? The workflow for this type of communication looks like this:
- At the media source, input devices are opened for capture (getUserMedia, the HTML5 API used to access your media devices).
- Media from the input devices is encoded and transmitted across the network.
- At the media destination, the packets are decoded and formed into a media stream.
- The media stream is sent to output devices (the onaddstream event).
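As a quick sketch of the first and last steps, here is a minimal JavaScript fragment; the element IDs are hypothetical, and the legacy onaddstream handler is shown because this article uses it (newer browsers expose ontrack instead):

// Step 1: open the input devices for capture and preview the local stream.
// Assumes a <video id="localVideo" autoplay muted></video> element on the page.
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function (stream) {
    document.getElementById('localVideo').srcObject = stream;
  })
  .catch(function (err) {
    console.error('getUserMedia failed:', err);
  });

// Step 4: at the destination, the decoded stream arrives as an event on the
// peer connection (created later) and is attached to an output element.
peerConnection.onaddstream = function (event) {
  document.getElementById('remoteVideo').srcObject = event.stream;
};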
Luckily, the browser hides most of this complexity behind three primary APIs:
- MediaStream: acquisition of audio and video streams
- RTCPeerConnection: communication of audio and video data
- RTCDataChannel: communication of arbitrary application data
All it takes is a few lines of JavaScript code, and any web application can enable a rich teleconferencing experience with peer-to-peer data transfers. That's the promise and the power of WebRTC!
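For instance, here is a minimal sketch of the third API, RTCDataChannel, exchanging text between two peers; the channel label is our own, and the signaling and ICE setup a real call needs are omitted:

// Offerer: create the connection and a data channel named by us.
var pc = new RTCPeerConnection();
var channel = pc.createDataChannel('chat');
channel.onopen = function () {
  channel.send('hello, peer!');
};
channel.onmessage = function (event) {
  console.log('received:', event.data);
};

// Answerer: the same channel is delivered through the ondatachannel event.
pc.ondatachannel = function (event) {
  event.channel.onmessage = function (e) { console.log(e.data); };
};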
Assume that you have a requirement to set up your own WebRTC-based video/audio/chat conferencing within your existing web application. What do you need?
Prerequisites
Basic knowledge:
- HTML, CSS and JavaScript
- git
- Chrome DevTools
Experience of Node.js and socket.io would also be useful. Installed on your development machine:
- Google Chrome or Firefox.
- Code editor.
- Web cam.
- git, in order to get the source code.
- The source code.
- Node.js with socket.io and node-static. (Node.js hosting would also be an advantage.)
- SIGNALLING SERVER - The signaling server is your own implementation for managing and communicating between users. It also helps exchange the information needed to get the live video feed started. There are several ways to set up the back end; I personally prefer a Node.js server with Socket.io (a minimal sketch follows this list).
- STUN SERVER - The second and third servers are the STUN and TURN servers. These servers help users connect to each other to handle the actual live video and data channel messages. The difference is that STUN helps users connect directly to each other so they can communicate.
- TURN SERVER - When the STUN server can't make the connection due to firewalls or other network issues, that's when the TURN server is used. TURN acts as the middleman to connect the users. Some TURN servers can also act as STUN servers, in which case a separate STUN server is not required.
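Here is that minimal sketch, assuming the Node.js, socket.io and node-static packages from the prerequisites list; the room and event names are our own invention:

// server.js: static file serving plus a Socket.io-based signaling relay.
var nodeStatic = require('node-static');
var http = require('http');

var fileServer = new nodeStatic.Server('./public');
var app = http.createServer(function (req, res) {
  req.addListener('end', function () {
    fileServer.serve(req, res);
  }).resume();
}).listen(8080);

var io = require('socket.io')(app);
io.on('connection', function (socket) {
  socket.on('join', function (room) {
    socket.join(room); // group peers that want to talk to each other
  });
  socket.on('message', function (msg) {
    // Relay offers, answers and ICE candidates to everyone else in the room.
    socket.broadcast.to(msg.room).emit('message', msg);
  });
});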
Or, there are signaling servers which you can access for free; one I could find is listed below:
// using web-sockets for signaling!
var SIGNALING_SERVER = 'wss://wsnodejs.nodejitsu.com:443';
For the STUN and TURN servers, there are some places on the web that offer an open connection for free, but you will have to search for them. Our motto here, though, is to set up our own servers and have control over them; I am sure this is a wiser decision than relying on others.
I have set up some video and text chat samples on my website:
http://shabirhakim.net/chat/
STUN SERVER or TURN SERVER?
- The key difference between these two types of solutions is that media will travel directly between both endpoints if STUN (Simple Traversal of UDP through NAT) is used, whereas media will be proxied through the server if TURN is utilized.
- TURN is preferred because it is capable of traversing symmetric NATs too. However, STUN is useful to speed up the connection by getting immediate candidates when users are sitting behind the same NAT, e.g. on a LAN.
- A media relay server or ICE server is utilized to set up the media session and provide the list of potential candidates to both parties in a call, regardless of which media delivery option is selected for each end of the call.
- Also understand that the media stream may not always use the same solution on both ends, as STUN may be possible for one endpoint but not for the other.
- When we use both STUN and TURN servers, STUN is always attempted first; TURN is used as a fall-back option depending on client locations and network topologies.
var iceServers = {
    iceServers: [STUN, TURN]
};
- The TURN protocol runs on top of STUN to set up a relay service. A well-written TURN server will also function as a STUN server, so you can skip a separate STUN server in that case.
- TURN was developed to cover the holes that haven't been (or may not be) punched by STUN, e.g. symmetric NATs.
- A critical disadvantage of a TURN server is its cost, and the huge bandwidth usage when an HD video stream is delivered.
- When the protocol was updated to include support for TCP, the name was changed to Session Traversal Utilities for NAT to reflect that it was no longer limited to UDP traffic.
- Although media leveraging STUN is not a direct host-to-host session, it is the next best option, as the media path is still sent directly between the two clients' own firewalls, over the Internet.
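Putting the above together, here is a sketch of a fuller ICE configuration; all server URLs and credentials below are placeholders, not real services:

var configuration = {
  iceServers: [
    // STUN: tried first; yields a direct peer-to-peer path when possible.
    { urls: 'stun:stun.example.org:3478' },
    // TURN: the relayed fall-back; requires credentials from your own server.
    { urls: 'turn:turn.example.org:3478', username: 'user', credential: 'secret' }
  ]
};
var pc = new RTCPeerConnection(configuration);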
What is signaling?
Signaling is the process of coordinating communication. In order for a
WebRTC application to set up a 'call', its clients need to exchange
information:
- Session control messages used to open or close communication.
- Error messages.
- Media metadata such as codecs and codec settings, bandwidth and media types.
- Key data, used to establish secure connections.
- Network data, such as a host's IP address and port as seen by the outside world.
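For example, the session-description part of that exchange looks roughly like this; sendSignal() is a hypothetical helper standing in for whatever transport you build:

var pc = new RTCPeerConnection();

// Caller: create an offer, store it locally, then ship the SDP to the peer.
pc.createOffer().then(function (offer) {
  return pc.setLocalDescription(offer);
}).then(function () {
  sendSignal({ type: 'offer', sdp: pc.localDescription });
});

// Callee: apply the remote offer, then answer it the same way.
function onOffer(msg) {
  pc.setRemoteDescription(new RTCSessionDescription(msg.sdp))
    .then(function () { return pc.createAnswer(); })
    .then(function (answer) { return pc.setLocalDescription(answer); })
    .then(function () {
      sendSignal({ type: 'answer', sdp: pc.localDescription });
    });
}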
This signaling process needs a way for clients to pass messages back and forth.
That mechanism is not implemented by the WebRTC APIs: you need to build it
yourself. We describe below some ways to build a signaling service. First,
however, a little context...
How Communication is achieved:
What is STUN?
Session Traversal Utilities for NAT (STUN) (an acronym within an acronym) is a protocol to discover your public address and determine any restrictions in your router that would prevent a direct connection with a peer. The client sends a request to a STUN server on the internet, which replies with the client's public address and whether or not the client is accessible behind the router's NAT.
What is NAT?
Network Address Translation (NAT) is used to give your device a public IP address. A router will have a public IP address and every device connected to the router will have a private IP address. Requests from the device's private IP are translated to the router's public IP with a unique port. That way you don't need a unique public IP for each device but can still be discovered on the internet. Some routers have restrictions on who can connect to devices on the network. This can mean that even though we have the public IP address found by the STUN server, not just anyone can create a connection. In this situation we need to turn to TURN.
What is TURN?
Some routers using NAT employ a restriction called 'symmetric NAT'. This means the router will only accept connections from peers you've previously connected to. Traversal Using Relays around NAT (TURN) is meant to bypass the symmetric NAT restriction by opening a connection with a TURN server and relaying all information through that server. You would create a connection with a TURN server and tell all peers to send packets to the server, which will then be forwarded to you. This obviously comes with some overhead, so it is only used if there are no other alternatives.
What is SDP?
Session Description Protocol (SDP) is a standard for describing the multimedia content of the connection, such as resolution, formats, codecs, and encryption, so that both peers can understand each other once the data is transferring. This is not the media itself but rather the metadata.
What is an ICE candidate?
As well as exchanging information about the media (discussed above under SDP), peers must exchange information about the network connection. This is known as an ICE candidate and details the available methods by which the peer is able to communicate (directly or through a TURN server).
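A matching sketch of the candidate exchange, reusing the hypothetical sendSignal() helper from the signaling section above:

// Each candidate the browser gathers is trickled to the remote peer.
pc.onicecandidate = function (event) {
  if (event.candidate) {
    sendSignal({ type: 'candidate', candidate: event.candidate });
  }
};

// Candidates arriving from the remote peer are fed into the connection.
function onCandidate(msg) {
  pc.addIceCandidate(new RTCIceCandidate(msg.candidate));
}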
Architectural and Implementation Overview of WebRTC
To avoid redundancy and to maximize compatibility with established technologies, signaling methods and protocols are not specified by WebRTC standards. This approach is outlined by JSEP (JavaScript Session Establishment Protocol). JSEP's architecture also avoids a browser having to save state, that is, to function as a signaling state machine. This would be problematic if, for example, signaling data were lost each time a page was reloaded. Instead, signaling state can be saved on a server.
What is the goal of WebRTC?
WebRTC aims to give the development community access to open, high-quality, real-time communications technology. Before WebRTC, this type of RTC technology has only been available to large corporations who can afford the expensive licensing fees or through proprietary plugins like Adobe Flash. WebRTC will open the door for a new wave of video, voice, and data web applications.
What is in the future?
Till now, almost all vendors have joined hands and provided full-fledged HTML5 API support in their respective browsers to integrate WebRTC; Microsoft has not yet. Once this matures, WebRTC is going to make video conferencing with your online customers fun. There will be ongoing discussions as to what video codec(s) will be implemented.
With just a browser, users will be able to start a video chat with the other end. WebRTC may be the next wave in real-time communications over IP networks. If you have not heard of WebRTC, it is the right time to get into it.
Much of the promise of WebRTC is in the fact that it provides APIs from the browser to the underlying hardware. An HTML5 API, getUserMedia, is a key feature that captures the output of the media devices for the codecs.
It's currently built into Chrome 21, Opera 12, Firefox 17, and Internet Explorer (via Chrome Frame). TenHands, a startup video conferencing company, has embedded their WebRTC capability into FaceTime.
Thanks to Muaz Khan, who has done excellent and mind-blowing work for WebRTC. You can see live P2P samples which he has built purely using JavaScript.
WHAT APPLE SAYS:
Apple is silent, so Safari doesn't support WebRTC.
WHAT MICROSOFT SAYS:-
Microsoft submitted an alternative proposal to the W3C WebRTC 1.0 Working Draft dubbed CU-RTC-Web (Customizable, Ubiquitous Real-Time Communication).
How is the CU-RTC-Web proposal different from WebRTC?
The Microsoft draft outlines a low-level API that allows developers more direct access to the underlying network and media-delivery components. It exposes objects representing network sockets and gives explicit application control over the media transport.
In contrast, the WebRTC API abstracts these details with a text-based interface that passes encoded strings between the two participants in the call. With the WebRTC draft, developers are responsible for passing the strings between communicating browsers, but not explicitly configuring media transport for a video chat.
WEBRTC FAQS
What is WebRTC?
WebRTC is an open framework for the web that enables Real Time Communications in the browser. It includes the fundamental building blocks for high quality communications on the web such as network, audio and video components used in voice and video chat applications.
These components, when implemented in a browser, can be accessed
through a Javascript API, enabling developers to easily implement their
own RTC web app.
The WebRTC effort is being standardized at the API level at the W3C and at the protocol level at the IETF.
Why should I use WebRTC?
We think you'll want to build your next video chat style application using WebRTC. Here's why:
- A key factor in the success of the Internet is that its core technologies such as HTML, HTTP, and TCP/IP are open and freely implementable. Currently, there is no free, high quality, complete solution available that enables communication in the browser. WebRTC is a package that enables this.
- Already integrated with best-of-breed voice and video engines that have been deployed on millions of end points over the last 8+ years. Google is not charging royalties for this technology.
- Includes and abstracts key NAT and firewall traversal technology using STUN, ICE, TURN, RTP-over-TCP and support for proxies.
- Builds on the strength of the web browser: WebRTC abstracts signaling by offering a signaling state machine that maps directly to PeerConnection. Web developers can therefore choose the protocol of choice for their usage scenario (for example, but not limited to: SIP, XMPP/Jingle, etc.).
What is the Opus audio codec?
Opus is a royalty-free codec defined by IETF RFC 6716. It supports constant and variable bitrate encoding from 6 kbit/s to 510 kbit/s, frame sizes from 2.5 ms to 60 ms, and various sampling rates from 8 kHz (with 4 kHz bandwidth) to 48 kHz (with 20 kHz bandwidth, where the entire hearing range of the human auditory system can be reproduced).
What is the iSAC audio codec?
iSAC is a robust, bandwidth-adaptive, wideband and super-wideband voice codec developed by Global IP Solutions, used in many Voice over IP (VoIP) and streaming audio applications. iSAC is used by industry leaders in hundreds of millions of VoIP endpoints. This codec is included as part of the WebRTC project.
What is the iLBC audio codec?
iLBC is a free narrowband voice codec that was developed by Global IP Solutions, used in many Voice over IP (VoIP) and streaming audio applications. In 2004, the final IETF RFC versions of the iLBC codec spec and the iLBC RTP Profile draft became available. This codec is included as part of the WebRTC project.
What is the VP8 video codec?
VP8 is a highly efficient video compression technology that was developed by On2 Technologies. Google acquired On2 in February 2010 and made it available as part of the WebM Project. It is the video codec included in the WebRTC project.
What other components are included in the WebRTC package?
Audio
The WebRTC project offers a complete stack for voice communications. It includes not only the necessary codecs, but other components crucial for a great experience. This includes software-based acoustic echo cancellation (AEC), automatic gain control (AGC), noise reduction, noise suppression, and hardware access and control across multiple platforms.
Video
The WebRTC project builds on the VP8 codec, introduced in 2010 as part of the WebM Project. It includes components to conceal packet loss and clean up noisy images, as well as capture and playback capabilities across multiple platforms.
Network
Dynamic jitter buffers and error concealment techniques are included for audio and video; these help mitigate the effects of packet loss and unreliable networks. Also included are components for establishing a peer-to-peer connection using ICE / STUN / TURN / RTP-over-TCP and support for proxies. This technology comes in part from the libjingle project.
How do I access the WebRTC code?
Go to code.google.com/p/webrtc.
How can I test the quality of WebRTC components?
We have put an early preview sample application here.
Are WebRTC components subject to change?
WebRTC is based on an API that is still under development through efforts at the WHATWG, W3C and IETF. We hope to get to a stable API once a few browser vendors have implementations ready for testing. Once the API is stable, our goal will be to offer backwards compatibility and interoperability. The WebRTC API layer will be our main focus for stability and interoperability. The components under it may be modified to improve quality, performance and feature set.
How can I implement my own renderer or add my own hooks in the WebRTC platform?
To do this, please take a look at the external renderer API.
WebRTC components are open-source. How do I get the source and contribute code?
Please see Getting started and Contributing bug fixes for more information.
To be a Contributor, do I need to sign any agreements?
Yes, each Contributor must sign and return the Contributor License Agreement.
Do I have to be a programmer to use WebRTC?
Yes, to build WebRTC support into a software application or contribute improvements, programming skills are required. However, usage of the JavaScript APIs that call WebRTC in the browsers will only require typical web development skills.
Is the WebRTC project owned by Google or is it independent?
WebRTC is an open-source project supported by Google, Mozilla and Opera. The API and underlying protocols are being developed jointly at the W3C and IETF.
Are the WebRTC components from Google's acquisition of Global IP Solutions?
Yes, some components, such as VoiceEngine, VideoEngine, NetEQ and AEC, stem from the GIPS acquisition.
What codecs are supported in WebRTC?
The currently supported voice codecs are G.711, G.722, iLBC, and iSAC, and VP8 is the supported video codec. The list of supported codecs may change in the future.
Please explain how WebRTC is free of charge?
Some software frameworks, voice and video codecs require end-users, distributors and manufacturers to pay patent royalties to use the intellectual property within the software technology and/or codec. Google is not charging royalties for WebRTC and its components, including the codecs it supports (VP8 for video and iSAC and iLBC for audio). For more information, see the License page.
What does this license let me do?
Like most BSD licenses, this license allows you to use the WebRTC code with a minimum of restrictions on your use. You can use the code in proprietary software as well as open source software.
Do I need to release the source if I make changes?
No, the license does not require you to release source if you make changes. However, we would love to see any changes you make and possibly incorporate them, so if you want to participate please visit the code page and submit some patches.
Why is there a separate patent grant?
In order to decouple patents from copyright, thus preserving the pure BSD nature of the copyright license, the license and the patent grant are separate. This means we are using a standard (BSD) open source copyright license, and the patent grant can exist on its own. This makes WebRTC compatible with all major license scenarios.
What if someone gets the code from Google and gives it to me without changes. Do I have a patent grant from Google?
Yes, you still have the right to redistribute and you still have a patent license for Google's patents that cover the code that Google released.
What if someone makes a change to the code and gives it to me. Do I have a patent license from Google for that change?
You still have the right to redistribute but no patent license for the changes (if there are any patents covering them). We can't give patent licenses for changes people make after we distribute the code, as we have no way to predict what those changes will be. Other common licenses take the same approach, including the Apache license.
What if Google receives or buys a patent that covers the code I receive sometime after I receive the code. Do I have a patent grant for that patent?
Yes, you still have the right to redistribute and you still have a patent license for Google's patents that cover the code that Google released.
What if my competitor uses the code and brings patent litigation against me for something unrelated to the code. Does he or she still have a patent license?
Yes, he or she still has the right to redistribute and still has a patent license for Google's patents that cover the code that Google released.
WebRTC SOLUTIONS INDUSTRY NEWS
- infoTECH Spotlight Data Center Excellence Award Application Now Open
09/15/2014
- Leading U.S. Accounting Firm Relies on Aerohive's Next-Generation Wi-Fi Solution for Wireless Rollout Across Offices Nationwide
09/15/2014
- IIA Urges FCC to Rely on Section 706 Authority, Reject Calls for Title II Reclassification of Broadband
09/15/2014
- Freescale base station-on-chip technology powers Airvana OneCell LTE enterprise small cell solution
09/15/2014
- Avnet Embedded, TrueUC Solutions LLC Bring Cloud-Based Unified Communications Solutions to Global Market
09/15/2014
- Mewett & MacDonald team up for renewal of Connect with Care Scheme
09/15/2014
- Market Research Reports, Inc. (www.marketresearchreports.com): VoIP Market in India 2014, New Report Launched
09/15/2014
Other Tutorials
- Are you interested in a "more" simple full-fledged guide? Read this tutorial.
- Are you interested in a "beginners" guide? Read this tutorial.
- You can find many tutorials here: https://www.webrtc-experiment.com/#documentations