Introduction

This page describes the various frame buffers available in the cameras.


Raw

Description

The raw buffer contains the raw image data from the sensor for still photos. It only contains valid raw data during the shooting process. The same address space may be used for other things at other times.

Some cameras have a single raw buffer, while others alternate between two (or more ?).

Formats

All raw buffers contain Bayer data. Two different Bayer patterns are used on CHDK cameras.

The first, more common variant is

Red  Green  Green  Blue

The second is

Green  Blue  Red  Green

The total size can usually be found with the string "CRAW BUF SIZE"

The dimensions may be found with the string "CrwAddress %lx, CrwSize H %ld V %ld" (TODO other cameras)

The raw buffer may be 10 bit, 12 bit or 14 bit per pixel. The 10 and 12 bit formats are packed little endian, as detailed below.

The ASCII diagrams below show how the bits in memory make up pixels.

  1. Bytes are numbered from left to right
  2. Pixels are numbered from top to bottom
  3. Bits of a given pixel are numbered in the table itself. For the 12 BPP format, the letters A and B stand for 10 and 11

So the first pixel in the ten bit format has its least significant two bits in the most significant two bits of the first byte, and its most significant 8 bits in the second byte.

See the code of tools/rawconvert.c or core/raw.c for examples of how to read each possible position.

10-bit

Most pre-Digic IV cameras use this format

    0      |1      |2      |3      |4      |5      |6      |7      |8      |9      |
    xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 
0   10      98765432                                                                
1     987654                3210                                                   
2                   543210      9876                                              
3                         98                76543210                                
4                                   98765432                10                      
5                                                   3210      987654                
6                                                       9876                543210  
7                                                                   76543210      98
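
For illustration, here is a sketch in C of reading one pixel from the 10-bit packed format, reading the diagram above with the leftmost position of each byte column as that byte's most significant bit (CHDK's own implementations live in core/raw.c and tools/rawconvert.c; the buffer pointer and row length arguments are placeholders):

#include <stdint.h>

/* Read pixel (x,y) from a 10 bpp packed raw buffer laid out as in the
   diagram above: each group of 8 pixels occupies 10 bytes. */
static uint16_t get_raw_pixel_10(const uint8_t *rawadr, unsigned row_bytes,
                                 unsigned x, unsigned y)
{
    const uint8_t *addr = rawadr + y * row_bytes + (x / 8) * 10;
    switch (x % 8) {
        case 0: return ((uint16_t) addr[1]         << 2) | (addr[0] >> 6);
        case 1: return ((uint16_t)(addr[0] & 0x3F) << 4) | (addr[3] >> 4);
        case 2: return ((uint16_t)(addr[3] & 0x0F) << 6) | (addr[2] >> 2);
        case 3: return ((uint16_t)(addr[2] & 0x03) << 8) |  addr[5];
        case 4: return ((uint16_t) addr[4]         << 2) | (addr[7] >> 6);
        case 5: return ((uint16_t)(addr[7] & 0x3F) << 4) | (addr[6] >> 4);
        case 6: return ((uint16_t)(addr[6] & 0x0F) << 6) | (addr[9] >> 2);
        case 7: return ((uint16_t)(addr[9] & 0x03) << 8) |  addr[8];
    }
    return 0;
}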


12-bit

Most Digic IV cameras use this format

    0      |1      |2      |3      |4      |5      |
    xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
0   3210    BA987654                                
1       BA98                76543210                
2                   BA987654                3210    
3                                   76543210    BA98
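
Using the same conventions as the 10-bit sketch above, the 12-bit format can be read as follows (each group of 4 pixels occupies 6 bytes; names are placeholders):

static uint16_t get_raw_pixel_12(const uint8_t *rawadr, unsigned row_bytes,
                                 unsigned x, unsigned y)
{
    const uint8_t *addr = rawadr + y * row_bytes + (x / 4) * 6;
    switch (x % 4) {
        case 0: return ((uint16_t) addr[1]         << 4) | (addr[0] >> 4);
        case 1: return ((uint16_t)(addr[0] & 0x0F) << 8) |  addr[3];
        case 2: return ((uint16_t) addr[2]         << 4) | (addr[5] >> 4);
        case 3: return ((uint16_t)(addr[5] & 0x0F) << 8) |  addr[4];
    }
    return 0;
}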

14-bit

The G1 X has a 14 bit raw buffer.

Location

One address can generally be found using the string "CRAW BUF".

TODO: describe how to find the other addresses where needed.

In CHDK

TODO: add function etc.

Used for:

  • Raw/DNG images
  • bad pixel removal
  • shot_histogram
  • curves
  • raw develop


Bitmap (or Overlay)

Description

An overlay which is used to display the camera UI.

The overlay is double-buffered. The second buffer directly follows the first. TODO: describe usage

Format

The bitmap buffer is 8-bit, indexed. The palette varies between models, and between playback and record modes on some models.

The most common size is 360x240, but 720x240 and other sizes occur. The buffer may not be the same size as the actual display area or the viewport buffer.

Bitmap pixels are not square on some cameras, for example on those with a 720x240 buffer.

The buffer can be bigger than the actual image. If the buffer width is bigger than the image width, each row of pixels is padded to the buffer width. Similarly, if the buffer height is bigger than the image height, the last row is followed by padding to get to the buffer height. The latter is important when you want to get the location of the second bitmap buffer.
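
As a sketch of the addressing this implies (the function names here are illustrative, not CHDK functions):

#include <stdint.h>

/* Read one palette index from the 8-bit overlay; rows are padded to
   the buffer width, not the image width. */
static uint8_t overlay_pixel(const uint8_t *bitmap, unsigned buf_width,
                             unsigned x, unsigned y)
{
    return bitmap[y * buf_width + x];
}

/* The second buffer of the double-buffered overlay starts after
   buf_width * buf_height bytes. */
static const uint8_t *second_overlay_buffer(const uint8_t *bitmap,
                                            unsigned buf_width,
                                            unsigned buf_height)
{
    return bitmap + buf_width * buf_height;
}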

Palette

In general, there are 4 different bitmap types:

  • 0: no palette info
  • 1: 16 x 4 byte AYUV values
  • 2: 16 x 4 byte AYUV (A = 0..3 LUT)
  • 3: 256 x 4 byte AYUV (A = 0..3 LUT)

The data may be stored either little endian or big endian, depending on the camera.

The 8-bit indices in the bitmap buffer are actually two 4-bit indices into the palette. The two colours obtained this way are combined by adding the respective YUV components and averaging the alpha component. Note that U and V are signed bytes here.
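
A sketch of that combination for a 16-entry AYUV palette (the byte order of the AYUV entries differs between cameras, so the struct layout here is an assumption):

#include <stdint.h>

typedef struct {
    uint8_t a;   /* alpha */
    uint8_t y;   /* luma, unsigned */
    int8_t  u;   /* chroma, signed */
    int8_t  v;   /* chroma, signed */
} ayuv_entry;

/* Combine the two 4-bit palette indices packed into one overlay byte:
   Y, U and V are added, alpha is averaged. */
static void combine_palette(const ayuv_entry pal[16], uint8_t idx,
                            int *y, int *u, int *v, int *a)
{
    ayuv_entry c1 = pal[idx >> 4];
    ayuv_entry c2 = pal[idx & 0x0F];
    *y = c1.y + c2.y;
    *u = c1.u + c2.u;
    *v = c1.v + c2.v;
    *a = (c1.a + c2.a) / 2;
}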

Converting the YUV components to RGB is done as described for the viewport buffer.

On some cameras the alpha component is always either 0 or 3. It is not exactly known how this translates to actual alpha values. Using 255 for 3 seems to produce a reasonably decent result.

On newer cameras the palette consists of 256 x 4-byte entries in AYUV format. The alpha (A) value ranges from 0 to 3. Converting as follows gives good results when displayed on a Windows PC: 0 --> 128, 1 --> 171, 2 --> 214 and 3 --> 255. One special case is required: the alpha value for palette entry 0 should be set to 0 (completely transparent). This has been confirmed on the G12, SX30 IS, IXUS 310 HS and SX260 HS.

The colors (AYUV data) in the palette (see also Palette) depend on the exact camera model, may vary with user interface context (e.g. record vs playback vs menu), and additionally depend on whether or not CHDK overrides certain colors (CAM_LOAD_CUSTOM_COLORS). Colors defined for use by CHDK can be found in core/gui_draw.h. The palette can be retrieved using live_view_get_data in core/live_view.c with the flag LV_TFR_PALETTE.
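
As a sketch of the alpha conversion for the 256-entry palettes described above (the 128/171/214/255 values and the transparent entry 0 are the ones reported to work on a Windows PC):

#include <stdint.h>

/* Map the 2-bit alpha of a 256-entry AYUV palette to an 8-bit alpha
   for display on a PC; palette entry 0 is treated as fully transparent. */
static uint8_t overlay_alpha_8bit(unsigned palette_index, unsigned a)
{
    static const uint8_t alpha_lut[4] = { 128, 171, 214, 255 };
    if (palette_index == 0)
        return 0;
    return alpha_lut[a & 3];
}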

Forcing Firmware Redraw And Locking

To force the firmware to redraw its version of the overlay - for example, after closing the CHDK ALT menu - the firmware has a function typically referred to as RefreshPhysicalScreen. Note that the declaration of RefreshPhysicalScreen in include/lolevel.h gives it a long argument, but for some cameras (DryOS?) this parameter is not used. All calls in the CHDK code supply 1 as argument.

On some cameras (DryOS?) the firmware also provides functions ScreenLock and ScreenUnlock (also ScreenUnLock) to (dis)allow the firmware to draw the overlay. In such cameras these functions are typically needed to avoid having the firmware write over the image drawn by CHDK. The locking mechanism uses a counter (enabled_refresh_physical_screen in CHDK) to keep track of the difference between the number of calls to ScreenLock and ScreenUnlock. Note that ScreenUnlock not only decreases the counter, but usually also calls RefreshPhysicalScreen when the counter reaches 0.

For some firmwares that have this locking mechanism, the ScreenUnlock and RefreshPhysicalScreen functions are actually one and the same. Because of this, just calling RefreshPhysicalScreen to force the firmware to draw its overlay is not safe. It will cause an "unlock" which might result in undesired behaviour later on. Also, it will actually only do the refresh if the lock allows it after unlocking it once (that is, when enabled_refresh_physical_screen is 0). If one wants to just refresh the overlay in these cases, they will have to carefully handle the locking mechanism.

To facilitate the use of locking and refreshing, CHDK uses three functions. With vid_turn_off_updates and vid_turn_on_updates one can call ScreenLock and ScreenUnlock respectively (if available). The function vid_bitmap_refresh provides a wrapper around RefreshPhysicalScreen that takes care of the potential locking difficulties.
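
For example, a typical drawing sequence in CHDK code looks roughly like this (a sketch; the comments note what the wrappers map to where the firmware provides the corresponding functions):

#include "platform.h"   /* vid_turn_off_updates, vid_turn_on_updates, vid_bitmap_refresh */

/* Draw the CHDK overlay without the firmware drawing over it. */
static void draw_chdk_overlay_example(void)
{
    vid_turn_off_updates();      /* ScreenLock, where available */

    /* ... draw CHDK menus, OSD elements, etc. ... */

    vid_turn_on_updates();       /* ScreenUnlock; may refresh the screen */
}

/* Elsewhere, to force the firmware to redraw its own overlay: */
static void redraw_firmware_overlay_example(void)
{
    vid_bitmap_refresh();        /* safe wrapper around RefreshPhysicalScreen */
}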

Location (TODO)

In CHDK

N.B.: The return values of some functions might not be exactly as described but as needed by the current CHDK code. (TODO: how exactly?)

Functions:

  • include/platform.h
    • void *vid_get_bitmap_fb(): get base address of the bitmap buffers
    • long vid_get_bitmap_screen_width(): get width of actual image
    • long vid_get_bitmap_screen_height(): get height of actual image
    • long vid_get_bitmap_buffer_width(): get width of buffer
    • long vid_get_bitmap_buffer_height(): get height of buffer
    • void vid_bitmap_refresh(): force redrawing of the overlay by the firmware
    • void vid_turn_off_updates(): block drawing of the overlay by the firmware
    • void vid_turn_on_updates(): allow the firmware to draw the overlay again
    • long vid_is_bitmap_shown(): unused and only trivially implemented for A610 1.00f; should probably not be used, and could be removed from CHDK

Defines:

  • CAM_BITMAP_PALETTE: defines which palette the camera uses (defined in include/camera.h, platform/<platform>/platform_camera.h, used in core/gui_draw.h)
  • COLOR_*: palette colours (defined in core/gui_draw.h)


Used for:

  • UI


Viewport

Description

The viewport buffer is used for the live view from the camera, or images in playback.

There are at least three different viewport buffers: one for playback mode, two for record mode. One of the buffers for record mode is triple-buffered (the other is "single-buffered"). Unlike the bitmap buffers, the triple-buffered buffers need not follow each other in memory.

TODO: Explain difference between the two record mode buffers. TODO: Fast MD relies on knowing the current buffer, describe...

On many cameras, the dimensions of the viewport vary depending on factors such as shooting mode, digital zoom and TV out status. In record mode, the Event Procedures GetVRAMHPixelsSize and GetVRAMVPixelsSize return the actual dimensions. For width, the value is the number of Y values.

The pixel aspect ratio may also vary between modes.

Format

In the viewport buffer pixels are grouped in 4. Each group is represented in six bytes. Each pixel is represented by a YUV triplet where the U and V components are shared amongst the group. The order of the bytes is as follows, with Yi the Y component of the ith pixel in the group:

U Y1 V Y2 Y3 Y4

Note that U and V are signed bytes, while the Y components are unsigned bytes.

To convert a YUV triplet to an RGB triplet, one can use the following code:

function clip(val):
  if val < 0:
    return 0
  else if val > 255:
    return 255
  else:
    return val

R = clip( ((Y << 12)            + 5742 * V + 2048) >> 12 )
G = clip( ((Y << 12) - 1411 * U - 2925 * V + 2048) >> 12 )
B = clip( ((Y << 12) + 7258 * U            + 2048) >> 12 )

N.B.: The constants in the code above differ from the ones in one of the threads below. The code does (seem to) work.
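
Putting the layout and the conversion together, a sketch of decoding one group of 4 viewport pixels to RGB (the function names are only illustrative):

#include <stdint.h>

static int clip(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

/* Decode the n-th 4-pixel group of a viewport row, using the byte
   layout U Y1 V Y2 Y3 Y4 and the constants given above. */
static void decode_viewport_group(const uint8_t *row, unsigned n,
                                  uint8_t rgb[4][3])
{
    const uint8_t *p = row + n * 6;
    int u = (int8_t)p[0];                     /* signed */
    int v = (int8_t)p[2];                     /* signed */
    int ys[4] = { p[1], p[3], p[4], p[5] };   /* unsigned Y1..Y4 */
    int i;
    for (i = 0; i < 4; i++) {
        int y = ys[i];
        rgb[i][0] = clip(((y << 12)            + 5742 * v + 2048) >> 12);
        rgb[i][1] = clip(((y << 12) - 1411 * u - 2925 * v + 2048) >> 12);
        rgb[i][2] = clip(((y << 12) + 7258 * u            + 2048) >> 12);
    }
}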

As with the bitmap buffers, viewport buffers might be wider (and higher) than the actual image. Padding is done similarly. The height of a viewport buffer, however, is not really relevant as one doesn't need to look beyond the last row of the actual image.

With newer cameras the actual image might be offset at a different location than (0,0). This means that there can be some padding rows at the start of the buffer as well as some padding bytes at the start of each row.

See also http://chdk.setepontos.com/index.php?topic=5415.0 and http://chdk.setepontos.com/index.php?topic=6072.0 .

The format used appears to be similar to http://www.fourcc.org/yuv.php#Y411

Some information on converting these formats can be found on http://www.fourcc.org/fccyvrgb.php

Location (TODO)

The string "VRAM Address" can frequently be used to identify one of the record mode addresses (suitable for vid_get_viewport_fb).

In CHDK

N.B.: The return values of some functions might not be exactly as described but as needed by the current CHDK code. (TODO: how exactly?)

Functions:

  • include/platform.h
    • void *vid_get_viewport_fb_d(): get "single-buffered" buffer for playback mode
    • void *vid_get_viewport_fb(): get "single-buffered" buffer for record mode
    • void *vid_get_viewport_live_fb(): get triple-buffered buffer for record mode
    • int vid_get_viewport_width(): get the width of the actual image
    • long vid_get_viewport_height(): get the height of the actual image
    • int vid_get_viewport_buffer_width(): get the width of the buffer
    • int vid_get_viewport_xoffset(): get the x offset of the image in the buffer
    • int vid_get_viewport_yoffset(): get the y offset of the image in the buffer
    • int vid_get_viewport_image_offset(): get the offset of the first actual pixel (group) in the buffer (auxiliary function?)
    • int vid_get_viewport_row_offset(): get the number of bytes between rows (auxiliary function?)
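
As a usage sketch of the accessors above (the exact units returned by the offset helpers vary between ports and should be checked against the specific platform code):

#include "platform.h"   /* vid_get_viewport_* declarations */

/* Start address of image row y in the record mode live buffer,
   assuming the offsets are returned in bytes as described above. */
static unsigned char *viewport_row_addr(int y)
{
    unsigned char *frame = (unsigned char *)vid_get_viewport_live_fb();
    return frame
         + vid_get_viewport_image_offset()     /* offset of the first visible pixel group */
         + y * vid_get_viewport_row_offset();  /* bytes between buffer rows */
}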

Used for:

  • Motion detection
  • Live histogram
  • Zebra
  • Edge overlay


JPEG

Description

The JPEG buffer is a memory area which is the target of the compression process. It contains pure JPEG data; the Exif data is placed in a separate buffer.

Location

The JPEG buffer (address, size, content) can be captured at the right moment by hooking FileWriteTask. For more information, see http://chdk.setepontos.com/index.php?topic=7366.0

YUV (for JPEG)

Description

On some cameras the RAW buffer is converted to an intermediate YUV (UYVY) buffer before the JPEG process for some image sizes. On DIGIC II cameras, this only seems to be used for JPEG sizes that are not full width (i.e. not Wide or L). Some discussion can be found at http://chdk.setepontos.com/index.php?topic=4338.msg90451#msg90451

Movie

Description

The movie framebuffers are used as the uncompressed source for the movie's video frames. In some cases the live view buffers (see above) serve this additional purpose; in other cases separate buffers are created when needed. In every case encountered so far, these buffers are triple buffered (i.e. there are 3 of them).

These buffers are active when the current recording mode is one of the movie modes (even when idle). The resolution of the source picture usually does not change when an actual recording process is started; one exception so far is the A410, whose idle resolution is halved vertically in its 640x480 mode. The horizontal (source) resolution does change, however, depending on whether the TV-out plug is used: with an active TV-out, the source picture width (in pixels) decreases from 720 to 704 (or from 360 to 352) on the fly. The state of TV-out doesn't influence the non-shared special buffers (see the DIGIC III example below).

Format

In earlier VxWorks (DIGIC II) cameras, the pixel format of these buffers always seems to be Y411 (the same 3 buffers are used for live view and as movie buffers). In an early DryOS r23, DIGIC III camera, lower resolution movie modes work the same way as in the previously mentioned DIGIC II models. In its "high" (i.e. VGA) resolution movie mode, 3 new buffers are created and operated in parallel with the live view buffers. The pixel format for these special buffers is UYVY. Additional information can be found here: http://chdk.setepontos.com/index.php?topic=7067.0