Hey everyone,
I don't know if this is the right place to post this topic, as it's partially related to Basic4Android, Java and library development.
I found an interesting piece of code here: CameraView.java - camdroiduni - stream from Android camera to pc - Google Project Hosting
It's about live streaming from your device's camera to your browser, and I want to understand how the code actually works.
These are the things I want to understand:
1) How do they stream to the web browser? I understand that they serve an index.html page from the device's IP address (over WiFi) and that the page reloads itself every second. But how do they get the index.html file to the browser with sockets? (See the server sketch after the code block.)
2) They mention they are using video, but I'm still convinced they take pictures and send them, as I don't see MediaRecorder anywhere.
My question is how they keep sending AND saving those images to the SD folder (I think). I believe it's done with the code below, but how does it work? With Camera.takePicture() it takes too long to save the picture and restart the preview, so that's no option for a live stream. (See the frame-saving sketch after the code block.)
Java:
// Blocks until the preview callback below has produced a frame, then
// returns that frame as a JPEG byte array.
public synchronized byte[] getPicture()
{
    try
    {
        // Wait until the camera preview is running.
        while (!isPreviewOn) wait();
        isDecoding = true;
        // Ask the camera to deliver exactly one preview frame to onPreviewFrame().
        mCamera.setOneShotPreviewCallback(this);
        // Wait until onPreviewFrame() has finished converting that frame.
        while (isDecoding) wait();
    }
    catch (Exception e)
    {
        return null;
    }
    return mCurrentFrame;
}

// Scales the preview layout so it keeps the aspect ratio of the camera frames.
private LayoutParams calcResolution(int origWidth,
                                    int origHeight,
                                    int aimWidth,
                                    int aimHeight)
{
    double origRatio = (double) origWidth / (double) origHeight;
    double aimRatio = (double) aimWidth / (double) aimHeight;

    if (aimRatio > origRatio)
        return new LayoutParams(origWidth, (int) (origWidth / aimRatio));
    else
        return new LayoutParams((int) (origHeight * aimRatio), origHeight);
}

// Despite the name, this does not produce a JPEG: it converts the raw NV21
// (YUV420SP) preview buffer into ARGB pixels, which are later wrapped in a
// Bitmap and compressed to JPEG in onPreviewFrame().
private void raw2jpg(int[] rgb, byte[] raw, int width, int height)
{
    final int frameSize = width * height;

    for (int j = 0, yp = 0; j < height; j++)
    {
        // The interleaved V/U plane starts after the Y plane and has half the vertical resolution.
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++)
        {
            int y = 0;
            if (yp < raw.length)
            {
                y = (0xff & ((int) raw[yp])) - 16;
            }
            if (y < 0) y = 0;

            // Read a new V/U pair for every second pixel (4:2:0 chroma subsampling).
            if ((i & 1) == 0)
            {
                if (uvp < raw.length)
                {
                    v = (0xff & raw[uvp++]) - 128;
                    u = (0xff & raw[uvp++]) - 128;
                }
            }

            // Fixed-point YUV -> RGB conversion.
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);

            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;

            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                                 | ((g >> 2) & 0xff00)
                                 | ((b >> 10) & 0xff);
        }
    }
}

// Called once by the camera for each setOneShotPreviewCallback() request.
// Converts the raw preview frame to JPEG and wakes up getPicture().
@Override
public synchronized void onPreviewFrame(byte[] data, Camera camera)
{
    int width = mSettings.PictureW();
    int height = mSettings.PictureH();

    // // API 8 and above: YuvImage can compress the NV21 buffer directly.
    // YuvImage yuvi = new YuvImage(data, ImageFormat.NV21, width, height, null);
    // Rect rect = new Rect(0, 0, yuvi.getWidth(), yuvi.getHeight());
    // OutputStream out = new ByteArrayOutputStream();
    // yuvi.compressToJpeg(rect, 10, out);
    // byte[] ref = ((ByteArrayOutputStream) out).toByteArray();
    // mCurrentFrame = new byte[ref.length];
    // System.arraycopy(ref, 0, mCurrentFrame, 0, ref.length);

    // API 7: convert manually, wrap in a Bitmap and compress to JPEG.
    int[] temp = new int[width * height];
    OutputStream out = new ByteArrayOutputStream();
    Bitmap bm = null;
    raw2jpg(temp, data, width, height);
    bm = Bitmap.createBitmap(temp, width, height, Bitmap.Config.RGB_565);
    bm.compress(CompressFormat.JPEG, mSettings.PictureQ(), out);
    mCurrentFrame = ((ByteArrayOutputStream) out).toByteArray();

    // Signal getPicture() that a new frame is ready.
    isDecoding = false;
    notify();
}
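For question 1, the way I picture it (this is only my own sketch, not the camdroiduni code; the class name TinyHttpServer, the CameraView type and port 8080 are my own assumptions): the device doesn't "send" index.html to an IP address. It opens a ServerSocket on its own WiFi IP and waits for the browser to connect. For every request it writes a normal HTTP response back over the same socket: index.html (which just reloads itself every second) for "/", and the latest JPEG frame for the image URL.
Java:
// My own sketch, not the camdroiduni code. "CameraView" and port 8080 are
// assumptions; getPicture() is the method shown above.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class TinyHttpServer
{
    // index.html simply reloads itself every second, as described above.
    private static final String INDEX_HTML =
        "<html><head><meta http-equiv=\"refresh\" content=\"1\"></head>"
        + "<body><img src=\"/frame.jpg\"></body></html>";

    public void serve(CameraView preview) throws Exception
    {
        ServerSocket server = new ServerSocket(8080); // listen on the device's WiFi IP
        while (true)
        {
            Socket client = server.accept(); // the browser connects to us
            BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
            String requestLine = in.readLine(); // e.g. "GET /frame.jpg HTTP/1.1"
            OutputStream out = client.getOutputStream();

            if (requestLine != null && requestLine.contains("/frame.jpg"))
            {
                byte[] jpeg = preview.getPicture(); // grab the current preview frame as JPEG
                if (jpeg == null)
                {
                    out.write("HTTP/1.0 503 Service Unavailable\r\n\r\n".getBytes());
                }
                else
                {
                    out.write(("HTTP/1.0 200 OK\r\nContent-Type: image/jpeg\r\n"
                            + "Content-Length: " + jpeg.length + "\r\n\r\n").getBytes());
                    out.write(jpeg);
                }
            }
            else
            {
                out.write(("HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n"
                        + INDEX_HTML).getBytes());
            }
            out.flush();
            client.close();
        }
    }
}
Is that roughly what their code does over the socket, or do they use a different protocol?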
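And for question 2, this is how I imagine the "keep sending AND saving" part: a background thread keeps calling getPicture() (which only copies the current preview buffer via setOneShotPreviewCallback, so the preview never has to stop and restart like it does with Camera.takePicture()) and writes each JPEG to the SD card. Again just my own sketch with an assumed class name and path, not their actual code:
Java:
// My own sketch, not the camdroiduni code. "CameraView" and the SD path are
// assumptions; getPicture() is the blocking method shown above.
import java.io.File;
import java.io.FileOutputStream;

public class FrameSaver implements Runnable
{
    private final CameraView preview;        // the object that has getPicture()
    private volatile boolean running = true;

    public FrameSaver(CameraView preview)
    {
        this.preview = preview;
    }

    @Override
    public void run()
    {
        File file = new File("/sdcard/camdroid/current.jpg"); // example path
        file.getParentFile().mkdirs();
        while (running)
        {
            byte[] jpeg = preview.getPicture(); // blocks until onPreviewFrame() delivers a frame
            if (jpeg == null) continue;
            try
            {
                // Overwrite the previous frame so there is always one "latest" image.
                FileOutputStream fos = new FileOutputStream(file);
                fos.write(jpeg);
                fos.close();
            }
            catch (Exception e)
            {
                running = false; // SD card not available or not writable
            }
        }
    }

    public void stop() { running = false; }
}
If that is how it works, then the preview keeps running the whole time and "taking a picture" is just copying the current preview buffer, which would explain why it's fast enough for a live stream. Is that correct?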
It would also be great if someone could help me implement this in a library.
I really hope someone can explain these things as well as possible. That would be much appreciated.
Thanks!
XverhelstX