I have come across posts saying that images created from a CVPixelBuffer don't render on the iOS Simulator. To generate an image, you wrap the pixel buffer with this call:
CIImage* ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
On the simulator, the resulting image will always fail to render.
The workaround is to create a pixel buffer of your own and then read the rendered pixels into it yourself. Here is the code to create the buffer:
// Request an IOSurface-backed buffer; an empty dictionary asks for the default surface.
NSDictionary *options = @{ (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                   (OSType)HSPixelFormatToCoreVideoPixelFormat(inPixelFormat),
                                   (__bridge CFDictionaryRef)options, &pixelBuffer);
if (err != kCVReturnSuccess) { /* handle the error */ }
Here comes the crucial part: on the simulator, you need to read the pixels into the buffer manually.
glReadPixels(0, 0, mWidth, mHeight, GL_BGRA, GL_UNSIGNED_BYTE, videoFrame->GetBuffer());
This should fix the issue you're facing. videoFrame is a struct of your own that exposes the destination buffer.
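Putting the pieces together, the whole flow can be sketched as a single function. This is a minimal sketch, not the exact original code: it assumes an OpenGL ES context is current with a freshly rendered framebuffer, uses kCVPixelFormatType_32BGRA instead of the custom HSPixelFormatToCoreVideoPixelFormat helper, and reads directly into the pixel buffer's base address rather than into a separate videoFrame struct.

#import <CoreImage/CoreImage.h>
#import <CoreVideo/CoreVideo.h>
#import <OpenGLES/ES2/gl.h>

static CIImage *ReadFramebufferToCIImage(size_t width, size_t height)
{
    // IOSurface backing; an empty properties dictionary requests the defaults.
    NSDictionary *options = @{ (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                       kCVPixelFormatType_32BGRA,
                                       (__bridge CFDictionaryRef)options,
                                       &pixelBuffer);
    if (err != kCVReturnSuccess) {
        return nil;
    }

    // On the simulator, copy the framebuffer contents in by hand.
    // Note: this assumes the buffer is tightly packed (bytes-per-row ==
    // width * 4); check CVPixelBufferGetBytesPerRow() if your sizes differ.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    glReadPixels(0, 0, (GLsizei)width, (GLsizei)height,
                 GL_BGRA, GL_UNSIGNED_BYTE,
                 CVPixelBufferGetBaseAddress(pixelBuffer));
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    CVPixelBufferRelease(pixelBuffer);
    return ciImage;
}

Because the buffer is IOSurface-backed and filled explicitly, the resulting CIImage renders on both the simulator and the device.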