B4J Code Snippet - Hiding the Command Prompt (CMD) window when running a script or program (e.g. FFmpeg)

If you're looking to hide the Command Prompt (CMD) window when running a script or program,
there are several approaches depending on your needs.

I wanted to hide both the Command Prompt (CMD) window and the FFmpeg.exe program window at the same time.
Here is my solution, implemented on Windows 11, using two scripts (Script_converting.cmd and Script_converting.vbs):
Main - Code Module:
Dim media  As String = "0000.media" 'input file
Dim mp4    As String = "0000.mp4"     'output file
Dim script As String = File.Combine(File.DirApp, "\e-Scripts\Script_converting.vbs")

If Not(File.Exists(File.DirApp, "\e-Scripts\Script_converting.cmd")) Or Not(File.Exists(File.DirApp, "\e-Scripts\Script_converting.vbs")) Then
    If Not(File.Exists(File.DirApp, "e-Scripts")) Then File.MakeDir(File.DirApp, "e-Scripts")
    If Not(File.Exists(File.DirApp, "\e-Scripts\Script_converting.cmd")) Then
        Wait For (File.CopyAsync(File.DirAssets, "Script_converting.cmd", File.DirApp, "\e-Scripts\Script_converting.cmd")) Complete (Success As Boolean)
    End If
    If Not(File.Exists(File.DirApp, "\e-Scripts\Script_converting.vbs")) Then
        Wait For (File.CopyAsync(File.DirAssets, "Script_converting.vbs", File.DirApp, "\e-Scripts\Script_converting.vbs")) Complete (Success As Boolean)
    End If
End If

start_cmd(script & " " & File.Combine(File.DirApp, "\e-Scripts") & " 3 " & media & " " & mp4) 'Windows command script (with command interpreter 'cmd.exe')

Private Sub start_cmd(fScript As String)
    'shl is assumed to be declared elsewhere as: Private shl As Shell (jShell library)
    shl.Initialize("shl", "cmd.exe", Array As String("/c", fScript)) 'the .vbs script is used only to run the batch file with a hidden DOS window
    shl.WorkingDirectory = "D:\TMP\SessionData" 'target directory for the file conversion
    shl.RunSynchronous(-1)
End Sub

Private Sub shl_ProcessCompleted (Success As Boolean, ExitCode As Int, StdOut As String, StdErr As String)
    If Success = True And ExitCode = 0 Then
        #If Debug
        Log("SHL Process, Success") 'The Windows command script has completed.
        #End If
    End If
End Sub

Overview of details:
In the attached file Script_converting.cmd:
1. I used the /B flag
2. I used the /affinity 3 parameter - this means it will use CPU0 and CPU1
3. I used six FFmpeg threads: -threads 6
4. I used the -nostats -nostdin parameters - I skipped generating statistics
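The /affinity value is a hexadecimal bitmask over logical CPUs (bit 0 = CPU0, bit 1 = CPU1, so 3 selects CPU0 and CPU1). As a side note, not part of the scripts above, a short Python sketch decodes such a mask:

```python
def affinity_cpus(mask: int) -> list[int]:
    """Return the logical CPU indices selected by a Windows /affinity bitmask."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

# /affinity 3 = binary 11 -> CPU0 and CPU1
print(affinity_cpus(0x3))   # [0, 1]
print(affinity_cpus(0xC))   # [2, 3]
```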

In the attached file Script_converting.vbs:
1. WScript.Arguments(0) - determines the directory of the Script_converting.cmd file
2. WScript.Arguments(1) - determines the jump to the label: IF "%1"=="3" GOTO cnverting_med (see Script_converting.cmd)
3. WScript.Arguments(2) - determines the 0000.media input file
4. WScript.Arguments(3) - determines the 0000.mp4 output file
5. I used the "0, False" parameters to hide the FFmpeg.exe window as well

> See file: Script_converting.cmd
:cnverting_med
C:\Windows\System32\cmd.exe /c start /low /B /affinity 3 D:\ffmpeg\bin\ffmpeg.exe -i %2 -s 960x540 -c:v libx264 -b:v 1260k -r 24 -x264opts keyint=48:min-keyint=48:no-scenecut -profile:v main -preset medium -movflags +faststart -nostats -nostdin -threads 6 %3

> See file: Script_converting.vbs
CreateObject("Wscript.Shell").Run WScript.Arguments(0) & "\Script_converting.cmd " & WScript.Arguments(1) & " " & WScript.Arguments(2) & " " & WScript.Arguments(3), 0, False


PS. One thing worries me: Microsoft has mentioned deprecating VBScript within 5 years. Whether that happens or not, all applications relying on this tool will have to be rewritten 🤪
 

Attachments

  • Script_converting.zip (1.1 KB)

Magma
nice !

check: https://www.b4x.com/android/forum/threads/recdesk-screen-recorder-video-capture-grabber.143335/

RecDesk is using ffmpeg... I am sure it will be useful with your knowledge of ffmpeg...

By the way: I am searching for a way to use video chunks from ffmpeg... or a pipeline... in other words, I want to split the video and send it via web or MQTT and stream it... do you know any way?

T201016

I want to understand correctly: do you want to split, for example, one video file into selected fragments using FFmpeg? If so, I think I still have such source code, but I have to look for it well; it will take a while.
 

Magma

> I want to understand correctly: do you want to split, for example, one video file into selected fragments using FFmpeg? If so, I think I still have such source code, but I have to look for it well; it will take a while.
I want to record from a source... and at the same time... send some "ms" / chunks / frames (I don't know what exactly) to another IP (maybe using MQTT) and on the other side play those somehow... until the source stops sending.

Stream it my way..

(PS: the communication method is not a problem... the problem is how to get the chunks... and how to play them)
 

T201016

> I want to record from a source... and at the same time... send some "ms" / chunks / frames to another IP (maybe using MQTT) and on the other side play those somehow... until the source stops sending.
> (PS: the communication method is not a problem... the problem is how to get the chunks... and how to play them)

I imagine that you have a video file that you want to convert to MP4 using FFmpeg, and you do not want to save the output to disk; instead you want to pipe it directly to another program. Perhaps the file is large and you want to send it elsewhere without waiting for the whole conversion to finish, and without saving it to disk.
Something simple, like the following command, unfortunately does not work:

FFmpeg -f dhav -i 'http://192.168.2.100/somefile.dav' -c copy -f mp4

The standard MP4 file structure consists of data chunks called "atoms" (also known as "boxes").

Two of them are important here:
1. mdat - contains the multimedia tracks (such as audio or video);
2. moov - contains additional metadata (such as information about the duration of the video, codecs, etc.).

Importantly, each atom must declare its size in its header. For mdat, however, the size is not known until it has been completely written.
So FFmpeg starts by writing the mdat atom with its size set to zero, then seeks back to fill it in, and finally writes the moov atom.

However, you cannot seek backwards on a pipe - and that is exactly the problem.
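To make the size-in-header rule concrete, here is a minimal Python sketch (using synthetic bytes, not a real file) that walks top-level MP4 boxes, assuming the classic 32-bit big-endian size plus 4-character type header:

```python
import struct

def walk_boxes(data: bytes):
    """Yield (type, size) for each top-level MP4 box in a byte buffer."""
    offset = 0
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)        # 32-bit big-endian size
        box_type = data[offset + 4:offset + 8].decode("ascii")
        if size < 8:   # sizes 0/1 mean "to end of file" / 64-bit size; not handled here
            break
        yield box_type, size
        offset += size

# Synthetic stream: ftyp, then an mdat whose size had to be known up front, then moov.
blob = (struct.pack(">I4s", 16, b"ftyp") + b"isom\x00\x00\x02\x00"
        + struct.pack(">I4s", 12, b"mdat") + b"\x00" * 4
        + struct.pack(">I4s", 8, b"moov"))
print(list(walk_boxes(blob)))   # [('ftyp', 16), ('mdat', 12), ('moov', 8)]
```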

An alternative to the standard MP4 structure is "fragmented MP4".
Its moov atom is at the beginning of the file and may lack some fields (for example, the duration of the video).
The tracks are divided into fragments, each of which forms a pair of moof and mdat atoms (the moof atom contains metadata about its mdat fragment).
These fragments are small enough that FFmpeg can keep them in memory, wait until they are complete, emit the moof and mdat atoms, and repeat for the next fragment.

With fragmented MP4 there is no need to seek in the output stream.
To create such a file, FFmpeg needs one additional option:

FFmpeg -f dhav -i 'http://192.168.2.100/somefile.dav' -c copy -movflags 'frag_keyframe+empty_moov' -f mp4

The movflags parameter specifies that you want to create a fragmented MP4 file in which each keyframe starts a new fragment: an empty moov atom is written first, followed by the moof and mdat fragment pairs.

In practice, fragmented MP4 files are less compatible.
For example, the built-in Windows video player is confused by the empty moov atom: it reports a zero duration and the seek bar is useless, although, interestingly, it still plays the files.

Returning to your question in the thread:

The first thing you should know about FFmpeg is that if you use stream copy, such as -c:v copy, you cannot change the video (e.g. the segment time, size or anything else). For that, we need FFmpeg to generate an HLS playlist formatted as fMP4.
When playing the segmented MP4 file, an init file will always be available first, followed by the video segments.

FFmpeg -i 0000.mp4 -f hls -hls_segment_type fmp4 -c:v copy playlist.m3u8

This will generate a playlist file containing the file list: an init.mp4 file and the video segments.
Once you have this playlist and the files in the appropriate format, you can use any method you like
to deliver them to a Media Source Extensions (MSE) player.
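For illustration, here is a minimal sketch of how a consumer might read such a playlist; the playlist text below is a hypothetical example, not captured FFmpeg output:

```python
def parse_hls(playlist: str):
    """Return (init_file, segment_list) from a simple fMP4 HLS playlist."""
    init, segments = None, []
    for line in playlist.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-MAP:"):
            # e.g. #EXT-X-MAP:URI="init.mp4"
            init = line.split('URI="', 1)[1].rstrip('"')
        elif line and not line.startswith("#"):
            segments.append(line)          # media segment file names
    return init, segments

sample = """#EXTM3U
#EXT-X-VERSION:7
#EXT-X-MAP:URI="init.mp4"
#EXTINF:10.0,
playlist0.m4s
#EXTINF:10.0,
playlist1.m4s
#EXT-X-ENDLIST"""
print(parse_hls(sample))   # ('init.mp4', ['playlist0.m4s', 'playlist1.m4s'])
```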

Remember that if you want to change the size of the segments, you need to re-encode the video with a codec such as libx264.
In addition, if you want to see the available options for the HLS muxer, run FFmpeg -h muxer=hls and it will give you the details.

The FFmpeg command that I gave you generates fragmented MP4 files that are intended to be played through Media Source Extensions.
If you concatenate any single m4s fragment with init.mp4, you will get a complete MP4 file that can be played by a Media Source player.
An m4s file simply contains the mdat data, i.e. the actual video frames. The init.mp4 file contains the header information, with elements
such as width, height, codec, etc.

When using Media Source Extensions, you first send the init.mp4 file
to the Media Source player, followed by all the m4s fragments.

In this way you produce segments that are complete MP4 files instead of bare MP4 fragments.
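That ordering can be sketched in a few lines (the byte strings are placeholders standing in for real init.mp4 and m4s contents):

```python
def assemble_stream(init: bytes, fragments: list[bytes]) -> bytes:
    """Order matters: the init segment (ftyp+moov) first, then each moof+mdat pair."""
    out = bytearray(init)
    for frag in fragments:
        out += frag
    return bytes(out)

# Hypothetical stand-ins for init.mp4 and two m4s fragments:
stream = assemble_stream(b"ftyp+moov", [b"moof+mdat#1", b"moof+mdat#2"])
print(stream)   # b'ftyp+moovmoof+mdat#1moof+mdat#2'
```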

Looking for a way to get a fragmented MP4 file, I found the only approach that has worked for me so far (mp4fragment is a tool from the Bento4 package):

ffmpeg -y -i 0000.mp4 -c:a libfdk_aac -ac 2 -ab 128k -c:v libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -ss 0 -t 30 v0.mp4 // get the first 30 seconds of the video
./mp4fragment v0.mp4 fragmented-v0.mp4

or

ffmpeg -i ./myvideo.mp4 -c:a libfdk_aac -ac 2 -ab 128k -c:v libx264 -x264opts keyint=24:min-keyint=24:no-scenecut -f segment -segment_time 10 ./video/%01d.mp4

1. Splitting the video at regular time intervals:
- In general, we split a video into many parts to extract smaller fragments of it.

To begin with, we will split the video into several parts, each lasting exactly the same amount of time:

ffmpeg -i 0000.mp4 -acodec copy -f segment -segment_time 10 -vcodec copy -reset_timestamps 1 -map 0 output_time_%d.mp4

This command takes the input video file, 0000.mp4, and splits it into segments of 10 seconds each.

Let's go over each option:

-i specifies the input file, in our case, 0000.mp4
-acodec copy sets the audio codec for the output to copy, which means the audio stream will be copied without re-encoding
-f segment sets the format to segment
-segment_time 10 specifies the duration of each segment to 10 seconds
-vcodec copy sets the video codec for the output to copy, which means the video stream will be copied without re-encoding
-reset_timestamps 1 resets timestamps at the beginning of each segment, so every segment starts from zero
-map 0 maps all the streams from input to the output
output_time_%d.mp4 defines the naming pattern for the output files, where %d in the naming pattern is a placeholder for a numeric index

As a result of this command, we get three segments of the input video, 0000.mp4,
each with an approximate duration of 10 seconds.
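Idealized, the expected segment count is just the duration divided by the segment time, rounded up; a quick sketch (the 28.5-second duration is a hypothetical value):

```python
import math

def segment_count(duration: float, segment_time: float) -> int:
    """Idealized number of segments -f segment produces for a given duration."""
    return math.ceil(duration / segment_time)

# A hypothetical ~28.5 s input split with -segment_time 10 gives 3 segments.
print(segment_count(28.5, 10))   # 3
```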

However, it often happens that not all resulting segments last exactly 10 seconds.
The reason for this is the complex nature of the video file: many factors come into play,
including keyframes, audio-video synchronization, and a variable number of frames per second.

Therefore, re-encoding the video before segmentation can be a better approach,
enabling more accurate control of the segment duration.

2. Splitting with re-encoding:
- re-encoding the video may result in an increase or decrease in size,
depending on the quality of the input video and the libraries and codecs used for re-encoding.

To start with, we will re-encode the video with FFmpeg and then split it into many parts:

ffmpeg -i 0000.mp4 -c:v libx264 -c:a aac -strict experimental -b:a 192k -force_key_frames "expr:gte(t,n_forced*10)" -f segment -segment_time 10 -reset_timestamps 1 -map 0 output_part_%d.mp4

First of all, this command re-encodes the input video using the specified audio and video codecs.
Secondly, it forces keyframes at 10-second intervals.
Finally, it splits the video into parts, each lasting 10 seconds.

Let's go over the new parameters used in this command:

-c:v libx264 sets the video codec to libx264
-c:a aac sets the audio codec to AAC
-strict experimental is needed when using the AAC audio codec
-b:a 192k sets the audio bitrate to 192 kbps
-force_key_frames forces keyframes at regular intervals set by the following expression
"expr:gte(t,n_forced*10)" forces a keyframe every 10 seconds (time t)
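To see how the expression behaves, here is a small simulation of the n_forced counter (a sketch of the rule, not FFmpeg's actual implementation):

```python
def forced_keyframe_times(timestamps, interval=10.0):
    """Simulate expr:gte(t, n_forced*interval): force a keyframe the first
    time t reaches each successive multiple of the interval."""
    n_forced = 0
    forced = []
    for t in timestamps:
        if t >= n_forced * interval:   # gte(t, n_forced*10)
            forced.append(t)
            n_forced += 1              # counter advances after each forced keyframe
    return forced

# Frames every 0.5 s for 35 s: keyframes land at (or just after) 0, 10, 20, 30.
times = [i * 0.5 for i in range(70)]
print(forced_keyframe_times(times))   # [0.0, 10.0, 20.0, 30.0]
```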

3. Splitting the video at regular keyframe intervals:
- As with splitting at regular time intervals, splitting at regular keyframe intervals can be difficult.
So we will split the video file while re-encoding it:

ffmpeg -i 0000.mp4 -c:v libx264 -c:a aac -strict experimental -b:a 192k -force_key_frames "expr:gte(n,200*trunc(t/200))" -f segment -segment_frames $(seq 200 200 600 | paste -sd "," -) -reset_timestamps 1 -map 0 output_partf_%d.mp4

This command effectively splits the input video file, 0000.mp4, at every 200 keyframes.

The -force_key_frames "expr:gte(n,200*trunc(t/200))" parameter ensures that a keyframe is forced every 200 frames.

Let's take a closer look:

gte(n,200*trunc(t/200)) evaluates whether the current frame number n is greater than or equal to the calculated keyframe index
200*trunc(t/200) defines the keyframe index, where t represents the time in seconds

With the -segment_frames option, we can specify the frame numbers at which to split the video.
In the above command, we used seq to generate a sequence of numbers from 200 to 600
in increments of 200 and the paste command to join the numbers with a comma separator.

We can also manually specify the frame numbers (i.e., 200,400,600) instead of generating them with seq in the above command, achieving the same result.
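The seq/paste pipeline only builds the comma-separated frame list; the same string can be produced, for example, in Python:

```python
def frame_list(start: int, step: int, stop: int) -> str:
    """Equivalent of: seq start step stop | paste -sd "," -"""
    return ",".join(str(n) for n in range(start, stop + 1, step))

print(frame_list(200, 200, 600))   # 200,400,600
```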

In addition, we can force keyframes at any frame index that is a multiple of the target keyframe interval, in this case 200.

4. Extracting the video duration:
- Because the chronological duration of a video may be inconsistent, we can use both seconds and frames as video measurements.

4.1. Chronological units:
- First, let's find the duration of one of the parts we split earlier, with the help of ffprobe:

ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 output_part_1.mp4
10.000000

-v error sets the verbosity level to error
-show_entries format=duration specifies the information to be displayed, in our case, the duration of the multimedia file
-of default=noprint_wrappers=1:nokey=1 specifies the output format
output_part_1.mp4 is the name of the input multimedia file

Similarly, we can check the duration of each of the split parts with ffprobe.

4.2. Frames:
- Now, let’s find the total number of frames in one of the keyframe-split videos using ffprobe:

ffprobe -v error -select_streams v:0 -show_entries stream=nb_frames -of default=nokey=1:noprint_wrappers=1 output_partf_0.mp4
200

So the first split part has 200 frames. In this command, -select_streams v:0 selects the video stream with index 0.
The -show_entries stream=nb_frames option specifies the information to be displayed, i.e. the total number of frames in the input video file.

Similarly, we can use ffprobe to find the total number of frames in each of the split parts by changing the input file name.


In the description above, I tried to show you how to split a video into many parts using FFmpeg.
I also ask for your understanding regarding the translation from my weak English 🤪

I hope this extensive material will help you, at least in some small part, with your task.
 

T201016
mp4frag:

This is a parser that reads the data piped from FFmpeg containing a fragmented MP4 stream and splits it into the initialization segment and the media segments.
It is designed for streaming live video from CCTV cameras.

To create a compatible fragmented MP4 format, similar to real-world examples,
use the correct output arguments with FFmpeg:

ffmpeg -loglevel quiet -rtsp_transport tcp -i rtsp://192.168.1.21:554/user=admin_password=pass_channel=0_stream=1.sdp?real_stream -reset_timestamps 1 -an -c:v copy -f mp4 -movflags +frag_every_frame+empty_moov+default_base_moof -min_frag_duration 500000 pipe:1
ffmpeg -loglevel quiet -rtsp_transport tcp -i rtsp://192.168.1.18:554/user=admin&password=pass&channel=1&stream=1.sdp -reset_timestamps 1 -an -c:v copy -f mp4 -movflags +frag_keyframe+empty_moov+default_base_moof pipe:1
 

T201016
The latest version of FFmpeg can also output files compliant with HLS.
Given a video as input, it will split it into segments and create a playlist for us.

Here is the equivalent of the above command using FFmpeg:

ffmpeg -y -i sample.mov -codec copy -bsf h264_mp4toannexb -map 0 -f segment -segment_time 10 -segment_format mpegts -segment_list "/Library/Documents/vod/prog_index.m3u8" -segment_list_type m3u8 "/Library/Documents/vod/fileSequence%d.ts"
 

Magma
Just wow... a full tutorial on how exactly ffmpeg works..

Many thanks for your answer...

Now I need to find time... for experiments and for understanding your guides, because my English is not so good either..

But I think you understand me..
 

T201016
> Just wow... a full tutorial on how exactly ffmpeg works..
>
> Many thanks for your answer...
>
> Now I need to find time... for experiments and for understanding your guides, because my English is not so good either..
>
> But I think you understand me..
FFmpeg is extremely extensive; I sometimes think it would be too much to understand it completely. For now, I have not managed to do so fully.
 