
Can the eye tracking image also be used in OpenCV on Android?

asked 27 Jul '15, 20:15

phyatt ♦♦


It should work. We don't have an Android + OpenCV example posted yet, but it should be pretty straightforward.

We do have a very similar example for Windows with OpenCV, and a lot of the OpenCV code will carry over.

The attached files are found in the QuickStart project in the Quick Link2 SDK download on the support page.

http://www.eyetechds.com/support.html

Below is how it is accessed on Windows (this is from DisplayVideo.cpp in the QuickStart example):

// Create local members.
QLFrameData frame;
QLError qlerror = QL_ERROR_OK;

// Get a frame from the device. If there was an 
// error getting the frame then return an error.
if((qlerror = QLDevice_GetFrame(deviceId, 5000, &frame)) != QL_ERROR_OK)
{
    printf_s("Error getting frame from device. Error = %d\n", qlerror);  
    return DEC_ERROR;
}

// Create some local pointers to OpenCV image objects.
IplImage* ql2Image;
IplImage* displayImage;

// Create the OpenCV image objects and initialize the local pointers.
// The image from Quick Link 2 is 8-bit grayscale and its pixel data
// buffer is allocated inside Quick Link 2, so only create an image header
// for it. The image that will be displayed has colored overlays drawn on
// it and its buffer is not allocated elsewhere, so create a full image
// with three bytes per pixel.
ql2Image = cvCreateImageHeader(cvSize(frame.ImageData.Width, frame.ImageData.Height), 8, 1);
displayImage = cvCreateImage(cvSize(frame.ImageData.Width, frame.ImageData.Height), 8, 3);

// Create an OpenCV window for displaying the image
std::string windowName = "Quick Link 2 Image";
cvNamedWindow(windowName.c_str(), 1);

// Create some local variables that will be used for
// displaying text information on the display image.
CvFont font;
cvInitFont(&font,CV_FONT_HERSHEY_SIMPLEX|CV_FONT_ITALIC, 1, 1, 0, 1);
const int textBufferSize = 256;
char textBuffer[textBufferSize];
int fontSpacing = 30;
int fontSpacingMultiplier = 1;

// Create some other variables
bool success = true;
int waitKeyReturnValue = 0;

// Display the image and then get a new image. If a new image was retrieved
// successfully then loop through again until an image is not successfully
// retrieved or until the user presses Esc.
do
{
    // Reset the font spacing multiplier. The font spacing multiplier 
    // determines the line on which the text will be displayed. 
    fontSpacingMultiplier = 1;

    // Assign the pixel data buffer pointer in the OpenCV image to the 
    // pixel data buffer in the Quick Link 2 frame data.
    ql2Image->imageData = (char*)frame.ImageData.PixelData;

    // Copy the grey scale image to the color image buffer so it can be displayed.
    if(ql2Image->imageData != 0)
        cvCvtColor(ql2Image, displayImage, CV_GRAY2RGB);

    // Place some instructions on the image for the user.
    sprintf_s(textBuffer, textBufferSize, "Press ENTER to continue");
    cvPutText(displayImage, textBuffer, cvPoint(0, fontSpacing * fontSpacingMultiplier++), &font, CV_RGB(255,0,0));
    sprintf_s(textBuffer, textBufferSize, "Press ESC to exit");
    cvPutText(displayImage, textBuffer, cvPoint(0, fontSpacing * fontSpacingMultiplier++), &font, CV_RGB(255,0,0));

    // If the left eye was found then mark the pupil and the glints.
    if(frame.LeftEye.Found)
    {
        DrawCross(displayImage, 
        cvPoint((int)frame.LeftEye.Pupil.x, (int)frame.LeftEye.Pupil.y), 
        10, CV_RGB(0,255,0), 1);

        DrawCross(displayImage, 
        cvPoint((int)frame.LeftEye.Glint0.x, (int)frame.LeftEye.Glint0.y), 
        5, CV_RGB(0,255,0), 1);

        DrawCross(displayImage, 
        cvPoint((int)frame.LeftEye.Glint1.x, (int)frame.LeftEye.Glint1.y), 
        5, CV_RGB(0,255,0), 1);
    }

    // If the right eye was found then mark the pupil and the glints.
    if(frame.RightEye.Found)
    {
        DrawCross(displayImage, 
        cvPoint((int)frame.RightEye.Pupil.x, (int)frame.RightEye.Pupil.y), 
        10, CV_RGB(255,0,0), 1);

        DrawCross(displayImage, 
        cvPoint((int)frame.RightEye.Glint0.x, (int)frame.RightEye.Glint0.y), 
        5, CV_RGB(255,0,0), 1);

        DrawCross(displayImage, 
        cvPoint((int)frame.RightEye.Glint1.x, (int)frame.RightEye.Glint1.y), 
        5, CV_RGB(255,0,0), 1);
    }

    // Display the image in the OpenCV window.
    cvShowImage(windowName.c_str(), displayImage);
    success = ((qlerror = QLDevice_GetFrame(deviceId, 10000, &(frame))) == QL_ERROR_OK);

    // Check for user input.
    waitKeyReturnValue = cvWaitKey(1);

// If the user pressed Enter or Esc, or if a frame was not retrieved
// from the device successfully, then quit the loop.
} while((waitKeyReturnValue != cvWaitKeyEnter)  && (waitKeyReturnValue != cvWaitKeyEsc) && success);

// Destroy the OpenCV window and memory.
cvReleaseImageHeader(&(ql2Image));
cvReleaseImage(&(displayImage));
cvDestroyWindow(windowName.c_str());

On Android we currently return the image as a ByteArray, and we have an example of pushing it to a Surface. The LiveViewFragment shows how to do this.

https://gitlab.eyetechds.com/android_developers_public/aeye_usb_ref/blob/master/AEyeTabs/src/com/example/aeyetabs/LiveViewFragment.java

Below is some simplified code that just accesses the ByteArray and pushes it to a Bitmap object.

SurfaceHolder surfaceHolder = getHolder();

// Get a frame from the device (1000 ms timeout).
QLFrameData frame = qlDevice.getFrame(1000);

// The Quick Link 2 image is 8-bit grayscale, so ALPHA_8
// (one byte per pixel) matches the buffer layout.
Bitmap rawGrayImage = Bitmap.createBitmap(frame.imageData.width,
        frame.imageData.height, Bitmap.Config.ALPHA_8);

rawGrayImage.copyPixelsFromBuffer(frame.imageData.pixelData);

// Only draw if the surface is ready.
if (surfaceHolder.getSurface().isValid())
{
    Canvas canvas = surfaceHolder.lockCanvas();

    // m_drawMatrix and paint are set up elsewhere (see LiveViewFragment).
    canvas.drawBitmap(rawGrayImage, m_drawMatrix, paint);

    surfaceHolder.unlockCanvasAndPost(canvas);
}

The remaining work to get this into OpenCV for Android is making sure you can get that ByteArray into the OpenCV structures for Android (for example, an OpenCV Mat). It would probably use the same or similar classes to those used in the QuickStart example.
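As a rough sketch (not from the SDK itself; the field names and `width`/`height` values here are placeholders), with the OpenCV Android SDK you could wrap the grayscale bytes with something like `new Mat(height, width, CvType.CV_8UC1)` followed by `mat.put(0, 0, bytes)`, then convert with `Imgproc.cvtColor`. The plain-Java helper below shows the underlying pixel conversion those calls perform, expanding 8-bit grayscale into the packed ARGB layout that `Bitmap.Config.ARGB_8888` expects:

```java
public class GrayToArgb {
    // Expand 8-bit grayscale pixels (row-major) into packed ARGB ints:
    // fully opaque alpha, with R = G = B = the grayscale value.
    public static int[] toArgb(byte[] gray) {
        int[] argb = new int[gray.length];
        for (int i = 0; i < gray.length; i++) {
            int g = gray[i] & 0xFF; // treat the byte as unsigned
            argb[i] = 0xFF000000 | (g << 16) | (g << 8) | g;
        }
        return argb;
    }

    public static void main(String[] args) {
        int[] out = toArgb(new byte[] { 0, (byte) 255, (byte) 128 });
        // Black, white, and mid-gray pixels.
        System.out.printf("%08X %08X %08X%n", out[0], out[1], out[2]);
    }
}
```

An `int[]` like this can be handed straight to `Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888)` if you want a color Bitmap without going through OpenCV at all.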


answered 27 Jul '15, 20:17

phyatt ♦♦



Copyright © 2014-2017 EyeTech Digital Systems Inc. All rights reserved.