Hello everyone
The code below records the in-app screen. It can be used for a wide range of purposes: a video chat app, in-app video recording, or simply taking a screenshot.
Dependencies:
1- ReplayKit
2- UIKit
3- CoreImage
Code:
1- Inline Objective-C
B4i:
#If OBJC
#import <ReplayKit/ReplayKit.h>
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>
#import <AVFoundation/AVFoundation.h> // for AVAssetWriter (only used by the optional file-writing code below)

- (void)StartRecording
{
    RPScreenRecorder *screenRecorder = RPScreenRecorder.sharedRecorder;
    //screenRecorder.delegate = self;
    AVAssetWriter *assetWriter; // placeholder: configure an AVAssetWriter here if you want to write the frames to a file
    [screenRecorder startCaptureWithHandler:^(CMSampleBufferRef _Nonnull sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable error) {
        if (CMSampleBufferDataIsReady(sampleBuffer)) {
            if (assetWriter.status == AVAssetWriterStatusUnknown) {
                [assetWriter startWriting];
                [assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
            }
            if (assetWriter.status == AVAssetWriterStatusFailed) {
                NSLog(@"An error occurred.");
                return;
            }
            if (bufferType == RPSampleBufferTypeVideo) {
                // Convert the video sample buffer to a UIImage.
                CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
                CIContext *temporaryContext = [CIContext contextWithOptions:nil];
                CGImageRef videoImage = [temporaryContext createCGImage:ciImage
                                                                fromRect:CGRectMake(0, 0,
                                                                                    CVPixelBufferGetWidth(imageBuffer),
                                                                                    CVPixelBufferGetHeight(imageBuffer))];
                UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
                CGImageRelease(videoImage); // release the CGImage, otherwise every frame leaks memory
                // Raise the B4i event Some_Sub and pass the captured frame to it.
                [self.bi raiseEvent:nil event:@"some_sub:" params:@[image]];
                // if (assetWriterInput.isReadyForMoreMediaData) {
                //     [assetWriterInput appendSampleBuffer:sampleBuffer];
                // }
            }
        }
    } completionHandler:^(NSError * _Nullable error) {
        if (!error) {
            NSLog(@"Recording started successfully.");
        } else {
            NSLog(@"Failed to start recording: %@", error);
        }
    }];
}
#End If
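The code above only starts the capture. If you also need to stop it, ReplayKit provides stopCaptureWithHandler:. Here is a minimal sketch (the StopRecording method name is my own, not part of the original code); you can add it to the same #If OBJC block or a new one:
B4i:
#If OBJC
- (void)StopRecording
{
    [RPScreenRecorder.sharedRecorder stopCaptureWithHandler:^(NSError * _Nullable error) {
        if (!error) {
            NSLog(@"Recording stopped successfully.");
        }
    }];
}
#End If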
2- Then, to call the inline Objective-C:
B4i:
Dim no As NativeObject = Me
no.RunMethod("StartRecording", Null)
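In a real app you would typically start and stop the capture from the UI. A minimal sketch (the button names btnStart and btnStop, and the optional StopRecording method from the sketch above, are my own assumptions, not part of the original code):
B4i:
Sub btnStart_Click
    Dim no As NativeObject = Me
    no.RunMethod("StartRecording", Null) 'iOS may ask the user to allow screen recording the first time
End Sub

Sub btnStop_Click
    Dim no As NativeObject = Me
    no.RunMethod("StopRecording", Null) 'requires the optional StopRecording method shown above
End Sub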
3- Once the inline Objective-C captures a frame, it raises the sub "Some_Sub", and in this sub we handle the captured frame:
B4i:
Sub Some_Sub(x As Object)
    Log("here")
    If x <> Null Then
        Dim imgtest As Bitmap
        imgtest = x '-- the frame is now converted to an image
    End If
End Sub
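If you want to actually see the frames, a small variation of Some_Sub could push each frame to an ImageView. This is a sketch only; ImageView1 is an assumed view on your layout, not part of the original tutorial:
B4i:
Sub Some_Sub(x As Object)
    If x <> Null Then
        Dim frame As Bitmap = x 'the UIImage raised from the inline Objective-C is usable as a B4i Bitmap
        ImageView1.Bitmap = frame 'display the most recent captured frame
    End If
End Sub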
*This is my first iOS tutorial, so I really hope it was clear enough.
Thank you,
Saif