Audio/Video calling over internet using Socket.IO


Active Member
Licensed User
Longtime User
Hello everyone,

I was trying to transfer object data (bytes) using Socket.IO, and while doing that I was able to build audio/video calling over the internet. It works smoothly. I was thinking of posting that example here in the forum, but it is a really simple example.

Should I post a fully functional app and a signaling server? Or will just a simple example work?

What do you think?




Well-Known Member
Licensed User
Should I post a fully functional app and a signaling server? Or will just a simple example work?
I think the functional app, BUT a post with a simple description of what you did and found out would also be interesting while we wait for the fuller-featured stuff... 🖖👍
Well, maybe a simple example as well, so we can see the concept in action...

Have a look at what people have added to this thread: Lockdown activity posts


Well-Known Member
Licensed User

Firefox doesn't like this site :



Active Member
Licensed User
Longtime User

After lots of attempts (almost a year's worth), I was finally able to transmit a live microphone PCM buffer over the internet without any distortion using Socket.IO, from:
  1. Android to Android
  2. Android to iOS
  3. iOS to Android
  4. iOS to iOS
It was very difficult to match the PCM buffer data format between the two platforms. As there is no AudioStreamer available in B4i like there is in B4A, I had to study the iOS AVAudioEngine to tap the microphone and get its live data. But the main problem was the format, because the iOS microphone output format is Float32 while the Android microphone output format is Int16.

If you change the format of the input node in iOS, the audio feed becomes choppy because you cannot change the frame capacity, so the buffer is either cropped or padded with silence bytes after the actual data. To get rid of this issue I had to use the iOS AVAudioConverter to convert the Float32 data to scaled Int16 data.
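The Float32-to-Int16 step can be illustrated outside of AVAudioConverter. A minimal sketch in Python (the function name and sample values are mine, not from the project), assuming normalized Float32 samples in the -1.0 to 1.0 range:

```python
import struct

def float32_to_int16_bytes(samples):
    """Convert normalized Float32 PCM samples (-1.0 .. 1.0) to
    little-endian Int16 bytes, clipping out-of-range values first."""
    out = bytearray()
    for s in samples:
        s = max(-1.0, min(1.0, s))          # clip to the valid range
        out += struct.pack("<h", int(s * 32767))
    return bytes(out)

# 0.0 maps to 0, 1.0 to 32767, -1.0 to -32767
pcm = float32_to_int16_bytes([0.0, 1.0, -1.0])
```

AVAudioConverter does this (plus the sample-rate change) in native code; the sketch only shows why a Float32 feed cannot be sent to an Int16 consumer byte-for-byte.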

Why scaled?
Because I noticed that the pitch of an iOS 88.2 kHz 16-bit PCM buffer actually matches the pitch of an Android 44.1 kHz 16-bit PCM buffer. If you don't scale it by 2, the voice pitch changes. I don't know why!

Here is the B4i audio recording and format conversion code:
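The frame-capacity arithmetic behind that scaling can be sketched as follows; this is a hypothetical helper mirroring the `outRate * inFrames / inRate` expression used when sizing the converter's output buffer, not code from the project:

```python
def scaled_frame_capacity(in_frames, in_rate, out_rate):
    """Frame capacity of a sample-rate-converted buffer: the converter
    keeps the duration constant, so the frame count scales with the
    sample-rate ratio (2x when going from 44.1 kHz to 88.2 kHz)."""
    return out_rate * in_frames // in_rate

# a 0.1 s buffer: 4410 frames at 44.1 kHz becomes 8820 frames at 88.2 kHz
frames = scaled_frame_capacity(4410, 44100, 88200)
```

Both buffers describe the same 0.1 s of audio; only the frame count (and therefore the byte count at 2 bytes per Int16 frame) doubles.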
Sub StartRecording
    ''Check if it's already running
    If AudioEngine.IsInitialized Then
        If AudioEngine.GetField("isRunning").AsBoolean = True Then
            Return
        End If
    End If
    ''Initialize AudioEngine
    AudioEngine = AudioEngine.RunMethod("alloc",Null).RunMethod("init",Null)
    ''Install a tap on the microphone; the buffer will be delivered to AudioNodeTap_Event
    Dim AudioInputNode As NativeObject = AudioEngine.GetField("inputNode")
    ''Initialize the output format (the iOS default sample rate is 44100, so we have to multiply it by 2)
    Dim toformat As NativeObject
    ''Initialize the AudioConverter
    ''Run the AudioEngine
    If AudioEngine.RunMethod("startAndReturnError:",Null).AsBoolean = False Then
        Msgbox("Cannot start audio engine","")
    End If
End Sub

Sub AudioNodeTap_Event(args() As Object)
    Dim no As NativeObject = Me
    ''The first object is the uncompressed raw PCM buffer
    ''The Objective C code is converting the Float32 buffer to Int16
    ''And returning the buffer bytes as NSData
    Dim data As Object = no.RunMethod("PCMtoNSData::",Array(args(0),AudioConverter))
    ''We have to convert it to B4IArray to be able to send over internet
    Dim buffer() As Byte = no.NSDataToArray(data)
    ''Resetting the AudioConverter so that it can process the next buffer
    ''Send the byte array to the server
End Sub

Sub StopRecording
    ''If it's running, first remove the microphone tap and then stop the AudioEngine
    If AudioEngine.IsInitialized And AudioEngine.GetField("isRunning").AsBoolean Then
        Dim AudioInputNode As NativeObject = AudioEngine.GetField("inputNode")
        AudioInputNode.RunMethod("removeTapOnBus:", Array(0))
        AudioEngine.RunMethod("stop", Null)
    End If
End Sub

#if objc
-(NSData*) PCMtoNSData:(AVAudioPCMBuffer*) buffer :(AVAudioConverter*) converter{
    //Initializing an empty Int16 buffer with the scaled frame capacity
    AVAudioPCMBuffer *outputBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:converter.outputFormat frameCapacity:(converter.outputFormat.sampleRate * buffer.frameLength / buffer.format.sampleRate)];
    outputBuffer.frameLength = outputBuffer.frameCapacity;
    //Converting the buffer
    AVAudioConverterOutputStatus oStatus = [converter convertToBuffer:outputBuffer error:nil withInputFromBlock:^AVAudioBuffer * _Nullable(AVAudioPacketCount inNumberOfPackets, AVAudioConverterInputStatus *outStatus){
        *outStatus = AVAudioConverterInputStatus_HaveData;
        return buffer;
    }];
    //Copying the converted Int16 channel bytes into an NSData object
    NSData *values = [[NSData alloc] initWithBytes:outputBuffer.int16ChannelData[0] length:(outputBuffer.frameLength * outputBuffer.format.streamDescription->mBytesPerFrame)];
    return values;
}
#End If
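The converted byte array then goes out over Socket.IO (the transport code isn't shown above). As a transport-agnostic sketch, larger buffers can be split into fixed-size packets before emitting; the helper name and packet size here are illustrative assumptions, not from the project:

```python
def chunk_pcm(buffer, packet_size=4096):
    """Split a PCM byte buffer into fixed-size packets so each
    Socket.IO emit stays small; the final packet may be shorter."""
    return [buffer[i:i + packet_size]
            for i in range(0, len(buffer), packet_size)]

packets = chunk_pcm(b"\x00" * 10000)
# each packet would then be emitted as one binary Socket.IO event,
# and the receiver concatenates packets back into a playable buffer
```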

I will post the whole project when it's done.

