
Exploring WebSocket: transmitting audio and images

2015/12/26 · JavaScript · 3 comments · websocket

Original article: AlloyTeam

Most of you are probably already familiar with WebSocket; if not, that's fine too. It can be summed up in one sentence:

"The WebSocket protocol is a new protocol introduced with HTML5. It enables full-duplex communication between browser and server."

Compared with the traditional server-push techniques, WebSocket is a huge improvement. We can wave goodbye to Comet and long polling, and be thankful we live in an era that has HTML5~

This article explores WebSocket in three parts:

First, common ways of using WebSocket; second, building a server-side WebSocket implementation entirely by ourselves; and finally, the main focus: two demos built with WebSocket, image transfer and an online voice chat room. Let's go.

Part 1: Common ways of using WebSocket

Here I'll introduce three WebSocket implementations I consider common. (Note: this article assumes a Node.js environment.)

1. socket.io

A demo first:

JavaScript

var http = require('http');
var io = require('socket.io');
 
var server = http.createServer(function(req, res) {
    res.writeHeader(200, {'content-type': 'text/html;charset="utf-8"'});
    res.end();
}).listen(8888);
 
var socket = io.listen(server);
 
socket.sockets.on('connection', function(socket) {
    socket.emit('xxx', { /* options */ });
 
    socket.on('xxx', function(data) {
        // do something
    });
});

Anyone familiar with WebSocket can hardly not know socket.io, because socket.io is so famous and so good: it handles timeouts, the handshake and so on for you. I'd guess it's also the most widely used way of putting WebSocket to work. The most magical part of socket.io is its graceful degradation: when the browser doesn't support WebSocket, it quietly falls back internally to long polling and the like, and neither users nor developers need to care about the details. Very convenient.
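For completeness, here is what a minimal client-side counterpart might look like. This is my own sketch rather than the article's original code; it assumes socket.io is serving its bundled client script at /socket.io/socket.io.js (which it does by default), and the event name 'xxx' simply mirrors the server demo above.

JavaScript

// <script src="/socket.io/socket.io.js"></script> loads the client library first
var socket = io.connect('http://localhost:8888');

socket.on('connect', function() {
    // hypothetical payload, named only for illustration
    socket.emit('xxx', { hello: 'world' });
});

socket.on('xxx', function(data) {
    // handle data pushed from the server
    console.log(data);
});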

But everything has two sides. Precisely because socket.io is so complete, it also brings some pitfalls. The main one is bloat: its wrapping adds quite a bit of communication overhead, and the graceful-degradation advantage is gradually losing its luster as browser standardization moves forward:

Chrome Supported in version 4+
Firefox Supported in version 4+
Internet Explorer Supported in version 10+
Opera Supported in version 10+
Safari Supported in version 5+

This isn't to criticize socket.io as bad or obsolete; it's just that sometimes we can also consider other implementations~

 

2. The http module

We just called socket.io bloated, so now let's talk about something lighter. First, the demo:

JavaScript

var http = require('http');
var server = http.createServer();
// Node emits 'upgrade' with (req, socket, head); only req is used here
server.on('upgrade', function(req) {
    console.log(req.headers);
});
server.listen(8888);

A very simple implementation. In fact socket.io implements WebSocket this way internally too; it just wraps a set of handlers on top for us, and we could add those ourselves as well. Here are two screenshots from the socket.io source:

(Figure 1: socket.io source screenshot)

(Figure 2: socket.io source screenshot)
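If you want the 'upgrade' route above to actually answer the client, a minimal sketch of my own (not socket.io's code) can complete the handshake directly on the raw socket that Node hands us; the key hashing involved is exactly what part two walks through on a plain net socket.

JavaScript

var crypto = require('crypto');
var http = require('http');
var GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

var server = http.createServer().listen(8888);

// Node emits 'upgrade' with (req, socket, head) for an HTTP Upgrade request
server.on('upgrade', function(req, socket) {
    var key = req.headers['sec-websocket-key'];
    var accept = crypto.createHash('sha1').update(key + GUID).digest('base64');
    socket.write(
        'HTTP/1.1 101 Switching Protocols\r\n' +
        'Upgrade: websocket\r\n' +
        'Connection: Upgrade\r\n' +
        'Sec-WebSocket-Accept: ' + accept + '\r\n\r\n'
    );
    // from here on, raw WebSocket frames arrive on `socket`
});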

 

3. The ws module

One of the later examples uses it, so I'll just mention it here; the details come later~
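Just so the name isn't left hanging, a minimal ws echo server looks roughly like the sketch below (my own; the real usage, together with https, appears in part three).

JavaScript

var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 8888 });

wss.on('connection', function(socket) {
    socket.on('message', function(message) {
        socket.send(message); // echo straight back
    });
});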

 

Part 2: Implementing server-side WebSocket ourselves

We've just covered a few common WebSocket implementations. Now let's think about it: for developers,

compared with the traditional HTTP way of exchanging data, WebSocket adds server-pushed events that the client receives and then handles accordingly; developing with it doesn't feel all that different, does it?

That's because those modules have already filled in the data-frame parsing pitfalls for us. In this second part we'll try to build a simple server-side WebSocket module ourselves.

Thanks to 次碳酸钴 for the research help. I'll only go over this part briefly here; if you're curious, look up his blog, the web技术研究所.

Implementing server-side WebSocket ourselves mainly comes down to two things: using the net module to receive the data stream, and parsing the data against the official frame-structure diagram. Once those two parts are done, all the low-level work is finished.

First, a packet capture of the handshake request a client sends:

The client code is trivial:

JavaScript

ws = new WebSocket("ws://127.0.0.1:8888");

(Figure 3: packet capture of the client's handshake request, including the Sec-WebSocket-Key header)

The server has to produce a response for this key: append a specific string to the key, run SHA-1 over it once, convert the result to Base64 and send it back.

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

require('net').createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // Grab the Sec-WebSocket-Key the client sent
            key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            // Append the WS string, run SHA-1 once, then Base64-encode the result
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            // Write the response back to the client; every one of these fields is required
            o.write('HTTP/1.1 101 Switching Protocols\r\n');
            o.write('Upgrade: websocket\r\n');
            o.write('Connection: Upgrade\r\n');
            // This field carries the key the server just computed
            o.write('Sec-WebSocket-Accept: ' + key + '\r\n');
            // A blank line terminates the HTTP headers
            o.write('\r\n');
        }
    });
}).listen(8888);

With that, the handshake part is done; what remains is the work of parsing and generating data frames.

First, the frame-structure diagram provided by the official spec:

(Figure 4: the official WebSocket frame-structure diagram)

A quick rundown:

FIN marks whether this is the final fragment of a message.

The RSV bits are reserved, normally 0.

The opcode marks the frame type: continuation of a fragmented message, text or binary payload, heartbeat (ping/pong), and so on.

Here's a table of the opcode values:

(Figure 5: opcode values and their meanings)

MASK marks whether the payload is masked.

Payload len, together with the extended payload length that follows it, expresses the data length, and this is the fiddliest part.

Payload len is only 7 bits, which as an unsigned integer can only take values 0 to 127. A number that small obviously can't describe larger payloads, so the spec says: if the length is 125 or less, this field holds the actual length; if the field is 126, the next two bytes hold the length; if it is 127, the next eight bytes hold the length.

Masking-key is the 4-byte mask, present when MASK is set.
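To make the length and masking rules concrete, here is a small worked sketch of my own (the example frame sizes and the unmask helper are illustrative, not part of the original code):

JavaScript

// Header bytes for an unmasked server-to-client text frame (FIN=1, opcode=1, so the first byte is 0x81):
//   100-byte payload   -> [0x81, 100]                                   7-bit length used directly
//   300-byte payload   -> [0x81, 126, 0x01, 0x2C]                       126 plus a 2-byte big-endian length
//   70000-byte payload -> [0x81, 127, 0,0,0,0, 0x00,0x01,0x11,0x70]     127 plus an 8-byte length

// Client-to-server payloads are XOR-masked with the 4-byte Masking-key:
function unmask(payload, maskingKey) {
    var out = new Buffer(payload.length);
    for (var i = 0; i < payload.length; i++) {
        out[i] = payload[i] ^ maskingKey[i % 4];
    }
    return out;
}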

Here's the code that parses a data frame:

JavaScript

function decodeDataFrame(e) {
    var i = 0, j, s,
        frame = {
            FIN: e[i] >> 7,
            Opcode: e[i++] & 15,
            Mask: e[i] >> 7,
            PayloadLength: e[i++] & 0x7F
        };

    // 126: the next two bytes hold the real length
    if(frame.PayloadLength === 126) {
        frame.PayloadLength = (e[i++] << 8) + e[i++];
    }

    // 127: the next eight bytes hold the real length
    // (the high four bytes are skipped here, assuming the payload fits in 32 bits)
    if(frame.PayloadLength === 127) {
        i += 4;
        frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
    }

    if(frame.Mask) {
        frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];

        // Unmask the payload byte by byte
        for(j = 0, s = []; j < frame.PayloadLength; j++) {
            s.push(e[i+j] ^ frame.MaskingKey[j%4]);
        }
    } else {
        s = e.slice(i, i+frame.PayloadLength);
    }

    s = new Buffer(s);

    // Opcode 1 means a text frame, so turn the buffer into a string
    if(frame.Opcode === 1) {
        s = s.toString();
    }

    frame.PayloadData = s;
    return frame;
}

Next, the code that generates a data frame:

JavaScript

function encodeDataFrame(e) {
    var s = [],
        o = new Buffer(e.PayloadData),
        l = o.length;

    s.push((e.FIN << 7) + e.Opcode);

    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l&0xFF00) >> 8, l&0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
    }

    return Buffer.concat([new Buffer(s), o]);
}

All of this just follows the frame-structure diagram above, so I won't go into detail here; the focus of this article is the next part. If you're interested in this area, head over to the web技术研究所 blog~
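To show how the pieces fit together, here is a hedged sketch of my own that wires the handshake and the two frame helpers above into a tiny echo server. It assumes, for simplicity, that each TCP chunk after the handshake carries exactly one complete text frame, and it ignores close and ping frames; a real implementation has to buffer data and handle those cases.

JavaScript

var crypto = require('crypto');
var net = require('net');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

net.createServer(function(o) {
    var handshaken = false;
    o.on('data', function(e) {
        if(!handshaken) {
            // same handshake as above
            var key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            o.write('HTTP/1.1 101 Switching Protocols\r\n' +
                    'Upgrade: websocket\r\n' +
                    'Connection: Upgrade\r\n' +
                    'Sec-WebSocket-Accept: ' + key + '\r\n\r\n');
            handshaken = true;
        } else {
            // assume one complete frame per chunk: decode it and echo it back
            var frame = decodeDataFrame(e);
            o.write(encodeDataFrame({ FIN: 1, Opcode: 1, PayloadData: 'echo: ' + frame.PayloadData }));
        }
    });
}).listen(8888);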

 

Part 3: Transmitting images over WebSocket, and a WebSocket voice chat room

Now for the main event. The real point of this article is to show some application scenarios for WebSocket.

1. Transmitting images

Let's think through the steps for transmitting an image: the server receives the client's request, reads the image file, and forwards the binary data to the client. How does the client handle it? With a FileReader object, of course.

Client code first:

JavaScript

var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888"); ws.onopen = function(){ console.log("握手成功"); }; ws.onmessage = function(e) { var reader = new FileReader(); reader.onload = function(event) { var contents = event.target.result; var a = new Image(); a.src = contents; document.body.appendChild(a); } reader.readAsDataUENVISIONL(e.data); };

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");
 
ws.onopen = function(){
    console.log("握手成功");
};
 
ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};

On receiving the message, call readAsDataURL and add the base64 image straight to the page.

Now over to the server-side code:

JavaScript

fs.readdir("skyland", function(err, files) { if(err) { throw err; } for(var i = 0; i < files.length; i++) { fs.readFile('skyland/' + files[i], function(err, data) { if(err) { throw err; } o.write(encodeImgFrame(data)); }); } }); function encodeImgFrame(buf) { var s = [], l = buf.length, ret = []; s.push((1 << 7) + 2); if(l < 126) { s.push(l); } else if(l < 0x10000) { s.push(126, (l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF); } return Buffer.concat([new Buffer(s), buf]); }

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
fs.readdir("skyland", function(err, files) {
if(err) {
throw err;
}
for(var i = 0; i < files.length; i++) {
fs.readFile('skyland/' + files[i], function(err, data) {
if(err) {
throw err;
}
 
o.write(encodeImgFrame(data));
});
}
});
 
function encodeImgFrame(buf) {
var s = [],
l = buf.length,
ret = [];
 
s.push((1 << 7) + 2);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), buf]);
}

Note the line s.push((1 << 7) + 2): the opcode is simply hard-coded to 2 here, a Binary Frame, so that the client won't try to call toString on the received data, which would otherwise throw~
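On the client side, a binary frame arrives as a Blob by default (or as an ArrayBuffer if you set ws.binaryType = 'arraybuffer'), so a hedged way to handle both kinds of frame is to branch on the type before reaching for FileReader:

JavaScript

ws.onmessage = function(e) {
    if (e.data instanceof Blob) {
        // binary frame: read it as a data URL and show it as an image
        var reader = new FileReader();
        reader.onload = function(event) {
            var img = new Image();
            img.src = event.target.result;
            document.body.appendChild(img);
        };
        reader.readAsDataURL(e.data);
    } else {
        // text frame
        console.log(e.data);
    }
};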

The code is simple. Now let me share how WebSocket image transfer actually performs.

Tested with quite a few images, 8.24MB in total.

A regular static-resource server takes about 20s (the server is far away).

A CDN takes about 2.8s.

So what about our WebSocket approach??!

The answer: also about 20s. Disappointing, isn't it… The slowness is in the transfer itself, not in the server reading the images; the same images on the local machine finish in about 1s. So a raw data stream can't break through the limits that distance places on transfer speed either.

Next, let's look at another use of WebSocket~

 

Building a voice chat room with WebSocket

First, let's lay out what the voice chat room does:

a user joins the channel, audio comes in from the microphone and is sent to the backend, which relays it to everyone else in the channel; the others receive the message and play it back.

The difficulty seems to lie in two places: first, the audio input; second, playing back the received data stream.

Audio input first. This uses HTML5's getUserMedia method, but be warned, this method has a big gotcha when you go live, which I'll get to at the end. First the code:

JavaScript

if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}

The first argument is {audio: true}, enabling audio only; then an SRecorder object is created, and basically all subsequent operations happen on that object. At this point, if the code is running locally, the browser should ask whether to allow microphone input; confirm it and everything is up and running.

Next, let's look at the SRecorder constructor. Here are the important parts:

JavaScript

var SRecorder = function(stream) {
    ……
   var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
    ……
}

AudioContext is an audio context object. Anyone who has done audio filtering will know it: "before a piece of audio reaches the speakers for playback, we intercept it halfway, and that's how we get hold of the audio data; this interception is done by window.AudioContext, and all our audio operations are based on this object." Through an AudioContext we can create various AudioNode nodes, then add filters and play special effects.

Recording works on the same principle: we also go through AudioContext, but with one extra step, picking up the microphone's audio input. Instead of requesting an audio ArrayBuffer with ajax and then decoding it, the way we used to process audio, picking up the microphone requires the createMediaStreamSource method; note that its argument is exactly the stream passed to the callback in getUserMedia's second argument.
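For contrast, the "request a file and decode it" path mentioned above looks roughly like this (a sketch of my own; the URL is made up):

JavaScript

var context = new AudioContext();
var xhr = new XMLHttpRequest();
xhr.open('GET', '/some-audio.mp3', true);   // hypothetical URL
xhr.responseType = 'arraybuffer';
xhr.onload = function() {
    // decode the ArrayBuffer, then play it through the context
    context.decodeAudioData(xhr.response, function(audioBuffer) {
        var source = context.createBufferSource();
        source.buffer = audioBuffer;
        source.connect(context.destination);
        source.start(0);
    });
};
xhr.send();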

As for the createScriptProcessor method, its official description is:

Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.

——————

In short, this method uses JavaScript to process the captured audio.

We've finally reached audio capture! Victory is in sight!

Next, let's connect the microphone input to the audio capture node:

JavaScript

audioInput.connect(recorder);
recorder.connect(context.destination);

The official explanation of context.destination is:

The destination property of the AudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

——————

context.destination returns the final destination of all audio in the context.

Good. At this point we still need an event that listens for the captured audio:

JavaScript

recorder.onaudioprocess = function (e) {
    audioData.input(e.inputBuffer.getChannelData(0));
}

audioData is an object I found online; I only added a clear method because it's needed later. The highlight is its encodeWAV method, which is excellent: the original author did repeated audio compression and optimization in it. It will appear later along with the complete code.

At this point the whole "user joins the channel and audio comes in from the microphone" step is done. Next comes sending the audio stream to the server, and here it gets slightly painful. As we said earlier, WebSocket uses the opcode to distinguish whether the data is text or binary, but what onaudioprocess feeds into input are arrays, while playing the sound at the other end needs a Blob of {type: 'audio/wav'}. So before sending we must convert those arrays into a WAV Blob, which is where the encodeWAV method mentioned above comes in.
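Pieced together, the send path is just a few lines (a minimal sketch; sendVoice is a hypothetical helper, and the full push-to-talk version is in the complete front-end code below):

JavaScript

function sendVoice() {
    var blob = audioData.encodeWAV();   // merge, downsample and wrap the Float32Array chunks in a WAV header
    ws.send(blob);                      // goes out as a binary frame
    audioData.clear();                  // reset the buffer for the next recording
}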

The server side looks simple; it only has to relay.

Local testing did work, but then came the giant pit! When the program ran on a server, calling getUserMedia told me it must be in a secure context, meaning HTTPS is required, which in turn means ws must become wss… So the server code stopped using our own handshake, parsing and encoding, and ended up as follows:

JavaScript

var https = require('https');
var fs = require('fs');
var ws = require('ws');
var userMap = Object.create(null);
var options = {
    key: fs.readFileSync('./privatekey.pem'),
    cert: fs.readFileSync('./certificate.pem')
};
var server = https.createServer(options, function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });
 
    fs.readFile('./testaudio.html', function(err, data) {
        if(err) {
            return ;
        }
 
        res.end(data);
    });
});
 
var wss = new ws.Server({server: server});
 
wss.on('connection', function(o) {
    o.on('message', function(message) {
        if(message.indexOf('user') === 0) {
            var user = message.split(':')[1];
            userMap[user] = o;
        } else {
            for(var u in userMap) {
                userMap[u].send(message);
            }
        }
    });
});
 
server.listen(8888);

The code is still quite simple: it uses the https module together with the ws module mentioned at the start; userMap is the simulated channel, and only the basic relay functionality is implemented.

The ws module is used because pairing it with https to get wss is just too convenient, and it doesn't clash with the logic code at all.

I won't go into setting up HTTPS here; mainly you need a private key, a CSR, and the signed certificate file. Interested readers can look into it (though without HTTPS you can't use getUserMedia in a production environment anyway…)

Below is the complete front-end code:

JavaScript

var a = document.getElementById('a');
var b = document.getElementById('b');
var c = document.getElementById('c');
 
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;
 
var gRecorder = null;
var audio = document.querySelector('audio');
var door = false;
var ws = null;
 
b.onclick = function() {
    if(a.value === '') {
        alert('Please enter a username');
        return false;
    }
    if(!navigator.getUserMedia) {
        alert('Sorry, your device does not support voice chat');
        return false;
    }
 
    SRecorder.get(function (rec) {
        gRecorder = rec;
    });
 
    ws = new WebSocket("wss://x.x.x.x:8888");
 
    ws.onopen = function() {
        console.log('handshake successful');
        ws.send('user:' + a.value);
    };
 
    ws.onmessage = function(e) {
        receive(e.data);
    };
 
    document.onkeydown = function(e) {
        if(e.keyCode === 65) {
            if(!door) {
                gRecorder.start();
                door = true;
            }
        }
    };
 
    document.onkeyup = function(e) {
        if(e.keyCode === 65) {
            if(door) {
                ws.send(gRecorder.getBlob());
                gRecorder.clear();
                gRecorder.stop();
                door = false;
            }
        }
    }
}
 
c.onclick = function() {
    if(ws) {
        ws.close();
    }
}
 
var SRecorder = function(stream) {
    config = {};
 
    config.sampleBits = config.smapleBits || 8;
    config.sampleRate = config.sampleRate || (44100 / 6);
 
    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
 
    var audioData = {
        size: 0          // total length of the recording
        , buffer: []     // recording buffer
        , inputSampleRate: context.sampleRate    // input sample rate
        , inputSampleBits: 16       // input sample bits: 8 or 16
        , outputSampleRate: config.sampleRate    // output sample rate
        , oututSampleBits: config.sampleBits       // output sample bits: 8 or 16
        , clear: function() {
            this.buffer = [];
            this.size = 0;
        }
        , input: function (data) {
            this.buffer.push(new Float32Array(data));
            this.size += data.length;
        }
        , compress: function () { // merge and downsample
            // merge the buffered chunks
            var data = new Float32Array(this.size);
            var offset = 0;
            for (var i = 0; i < this.buffer.length; i++) {
                data.set(this.buffer[i], offset);
                offset += this.buffer[i].length;
            }
            // downsample: keep every n-th sample
            var compression = parseInt(this.inputSampleRate / this.outputSampleRate);
            var length = data.length / compression;
            var result = new Float32Array(length);
            var index = 0, j = 0;
            while (index < length) {
                result[index] = data[j];
                j += compression;
                index++;
            }
            return result;
        }
        , encodeWAV: function () {
            var sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
            var sampleBits = Math.min(this.inputSampleBits, this.oututSampleBits);
            var bytes = this.compress();
            var dataLength = bytes.length * (sampleBits / 8);
            var buffer = new ArrayBuffer(44 + dataLength);
            var data = new DataView(buffer);
 
            var channelCount = 1;// mono
            var offset = 0;
 
            var writeString = function (str) {
                for (var i = 0; i < str.length; i++) {
                    data.setUint8(offset + i, str.charCodeAt(i));
                }
            };
 
            // RIFF (Resource Interchange File Format) identifier
            writeString('RIFF'); offset += 4;
            // total bytes from the next address to the end of the file, i.e. file size - 8
            data.setUint32(offset, 36 + dataLength, true); offset += 4;
            // WAVE format identifier
            writeString('WAVE'); offset += 4;
            // 'fmt ' sub-chunk identifier
            writeString('fmt '); offset += 4;
            // size of the fmt sub-chunk, normally 0x10 = 16
            data.setUint32(offset, 16, true); offset += 4;
            // audio format (1 = PCM)
            data.setUint16(offset, 1, true); offset += 2;
            // number of channels
            data.setUint16(offset, channelCount, true); offset += 2;
            // sample rate: samples per second, i.e. the playback speed of each channel
            data.setUint32(offset, sampleRate, true); offset += 4;
            // byte rate (average bytes per second): channels x sample rate x bits per sample / 8
            data.setUint32(offset, channelCount * sampleRate * (sampleBits / 8), true); offset += 4;
            // block align: bytes per sample frame, channels x bits per sample / 8
            data.setUint16(offset, channelCount * (sampleBits / 8), true); offset += 2;
            // bits per sample
            data.setUint16(offset, sampleBits, true); offset += 2;
            // 'data' sub-chunk identifier
            writeString('data'); offset += 4;
            // total size of the sample data, i.e. total size - 44
            data.setUint32(offset, dataLength, true); offset += 4;
            // write the sample data
            if (sampleBits === 8) {
                for (var i = 0; i < bytes.length; i++, offset++) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    var val = s < 0 ? s * 0x8000 : s * 0x7FFF;
                    val = parseInt(255 / (65535 / (val + 32768)));
                    data.setInt8(offset, val, true);
                }
            } else {
                for (var i = 0; i < bytes.length; i++, offset += 2) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
                }
            }
 
            return new Blob([data], { type: 'audio/wav' });
        }
    };
 
    this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }
 
    this.stop = function () {
        recorder.disconnect();
    }
 
    this.getBlob = function () {
        return audioData.encodeWAV();
    }
 
    this.clear = function() {
        audioData.clear();
    }
 
    recorder.onaudioprocess = function (e) {
        audioData.input(e.inputBuffer.getChannelData(0));
    }
};
 
SRecorder.get = function (callback) {
    if (callback) {
        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    var rec = new SRecorder(stream);
                    callback(rec);
                })
        }
    }
}
 
function receive(e) {
    audio.src = window.URL.createObjectURL(e);
}

Note: hold the A key to talk, release the A key to send.

I did try talking without the button, sending via setInterval, but found the background noise was quite heavy and the result poor; that would need another layer of wrapping on top of encodeWAV to remove more of the ambient noise, so I chose the more convenient push-to-talk approach.

 

This article first surveyed common ways of using WebSocket, then we tried parsing and generating data frames ourselves according to the spec, which gives a deeper understanding of WebSocket.

Finally, the two demos showed some of WebSocket's potential. The voice chat room demo covers quite a bit of ground; if you've never touched the AudioContext object, it's best to read up on AudioContext first.

That's it for this post~ If you have any thoughts or questions, feel free to raise them so we can discuss together~

 
