In the field of video compression, a video frame is compressed using different algorithms with different advantages and disadvantages, centered mainly on the amount of data compression. These different algorithms for video frames are called picture types or frame types. The three major picture types used in the different video algorithms are I, P and B. They differ in the following characteristics:
I‑frames are the least compressible but don't require other video frames to decode.
P‑frames can use data from previous frames to decompress and are more compressible than I‑frames.
B‑frames can use both previous and forward frames for data reference to get the highest amount of data compression.
Summary
Three types of pictures are used in video compression: I, P, and B frames. An I-frame is a complete image, like a JPG or BMP image file. A P-frame holds only the changes in the image from the previous frame. For example, in a scene where a car moves across a stationary background, only the car's movements need to be encoded. The encoder does not need to store the unchanging background pixels in the P-frame, thus saving space. P-frames are also known as delta-frames. A B-frame saves even more space by using differences between the current frame and both the preceding and following frames to specify its content.
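As a concrete illustration of the delta idea, the following sketch (Python with NumPy; the frame sizes, pixel values, and function names are purely illustrative and not part of any codec) stores a toy P-frame as the per-pixel difference from the previous frame, so only the moving "car" region contributes non-zero data.

```python
import numpy as np

def encode_p_frame(previous: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Toy 'P-frame': store only the per-pixel change from the previous frame."""
    return current.astype(np.int16) - previous.astype(np.int16)

def decode_p_frame(previous: np.ndarray, delta: np.ndarray) -> np.ndarray:
    """Reconstruct the current frame from the previous frame plus the stored delta."""
    return (previous.astype(np.int16) + delta).astype(np.uint8)

# A stationary background with a small "car" block that moves one column to the right.
background = np.full((8, 8), 100, dtype=np.uint8)
frame1 = background.copy(); frame1[3:5, 1:3] = 200   # car at columns 1-2
frame2 = background.copy(); frame2[3:5, 2:4] = 200   # car at columns 2-3

delta = encode_p_frame(frame1, frame2)
print("non-zero delta pixels:", np.count_nonzero(delta))   # only the car region changes
assert np.array_equal(decode_p_frame(frame1, delta), frame2)
```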
Pictures/frames
While the terms "frame" and "picture" are often used interchangeably, the term picture is a more general notion, as a picture can be either a frame or a field. A frame is a complete image, and a field is the set of odd-numbered or even-numbered scan lines composing a partial image. For example, an HD 1080 picture has 1080 lines of pixels. An odd field consists of pixel information for lines 1, 3, 5...1079. An even field has pixel information for lines 2, 4, 6...1080. When video is sent in interlaced-scan format, each frame is sent in two fields, the field of odd-numbered lines followed by the field of even-numbered lines. A frame used as a reference for predicting other frames is called a reference frame. Frames encoded without information from other frames are called I-frames. Frames that use prediction from a single preceding reference frame are called P-frames. B-frames use prediction from an average of two reference frames, one preceding and one succeeding.
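The sketch below is a minimal illustration in Python with NumPy (the function names and the 1080x1920 frame size are assumptions for the example, not part of any standard) of splitting a full frame into its odd and even fields and weaving them back together.

```python
import numpy as np

def split_fields(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split a full frame into its odd and even fields.

    Line numbers in the text are 1-based, so the 'odd' field (lines 1, 3, 5, ...)
    corresponds to rows 0, 2, 4, ... in 0-based array indexing.
    """
    odd_field = frame[0::2, :]   # lines 1, 3, 5, ..., 1079
    even_field = frame[1::2, :]  # lines 2, 4, 6, ..., 1080
    return odd_field, even_field

def weave_fields(odd_field: np.ndarray, even_field: np.ndarray) -> np.ndarray:
    """Interleave the two fields back into a full frame."""
    height = odd_field.shape[0] + even_field.shape[0]
    frame = np.empty((height, odd_field.shape[1]), dtype=odd_field.dtype)
    frame[0::2, :] = odd_field
    frame[1::2, :] = even_field
    return frame

hd_frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint8)
odd, even = split_fields(hd_frame)
assert odd.shape == (540, 1920) and even.shape == (540, 1920)
assert np.array_equal(weave_fields(odd, even), hd_frame)
```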
Slices
In the H.264/MPEG-4 AVC standard, the granularity of prediction types is brought down to the "slice level." A slice is a spatially distinct region of a frame that is encoded separately from any other region in the same frame. I-slices, P-slices, and B-slices take the place of I, P, and B frames.
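The following sketch illustrates the idea of slices as independently decodable regions by partitioning a frame into groups of consecutive macroblock rows; the function name and slice sizes are illustrative and do not reflect H.264 bitstream syntax.

```python
import numpy as np

MB_SIZE = 16  # macroblocks are 16x16 luma samples in H.264

def partition_into_slices(frame: np.ndarray, mb_rows_per_slice: int) -> list[np.ndarray]:
    """Split a frame into slices of consecutive macroblock rows.

    Purely illustrative: each returned region could be encoded and decoded
    without reference to the other regions of the same frame.
    """
    slice_height = mb_rows_per_slice * MB_SIZE
    return [frame[y:y + slice_height, :] for y in range(0, frame.shape[0], slice_height)]

frame = np.zeros((1088, 1920), dtype=np.uint8)   # 1088 rows = 68 macroblock rows of 16 lines
slices = partition_into_slices(frame, mb_rows_per_slice=17)
print(len(slices))  # 4 slices of 17 macroblock rows each
```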
Macroblocks
Typically, pictures are segmented into macroblocks, and individual prediction types can be selected on a macroblock basis rather than being the same for the entire picture, as follows (see the sketch after this list):
I-frames can contain only intra macroblocks
P-frames can contain either intra macroblocks or predicted macroblocks
B-frames can contain intra, predicted, or bi-predicted macroblocks
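A minimal sketch of these constraints, with the type names chosen for illustration rather than taken from any standard:

```python
# Allowed macroblock prediction types per picture/slice type (a simplification;
# real codecs define many more macroblock modes than shown here).
ALLOWED_MACROBLOCK_TYPES = {
    "I": {"intra"},
    "P": {"intra", "predicted"},
    "B": {"intra", "predicted", "bi-predicted"},
}

def is_macroblock_allowed(picture_type: str, macroblock_type: str) -> bool:
    """Return True if the macroblock prediction type is legal in this picture type."""
    return macroblock_type in ALLOWED_MACROBLOCK_TYPES[picture_type]

assert is_macroblock_allowed("P", "intra")          # P-frames may contain intra macroblocks
assert not is_macroblock_allowed("I", "predicted")  # I-frames contain only intra macroblocks
assert is_macroblock_allowed("B", "bi-predicted")
```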
Furthermore, in the H.264 video coding standard, the frame can be segmented into sequences of macroblocks called slices, and instead of using I, B and P-frame type selections, the encoder can choose the prediction style distinctly on each individual slice. H.264 also defines several additional types of frames/slices:
SI-frames/slices: Facilitate switching between coded streams; contain SI-macroblocks.
SP-frames/slices: Facilitate switching between coded streams; contain P and/or I-macroblocks.
Multi‑frame motion estimation increases the quality of the video, while allowing the same compression ratio. SI and SP frames improve error correction. When such frames are used along with a smart decoder, it is possible to recover the broadcast streams of damaged DVDs.
Intra-coded (I) frames/slices (key frames)
I-frames contain an entire image and are coded without reference to any frame other than (parts of) themselves.
May be generated by an encoder to create a random access point.
May also be generated when differentiating image details prohibit generation of effective P or B-frames.
Typically require more bits to encode than other frame types.
Often, I-frames are used for random access and as references for the decoding of other pictures. Intra refresh periods of a half-second are common in such applications as digital television broadcast and DVD storage. Longer refresh periods may be used in some environments. For example, in videoconferencing systems it is common to send I-frames very infrequently.
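As a rough illustration of how a refresh period maps to an encoder's keyframe spacing, the sketch below (the function names and the simple I/P-only pattern are assumptions for the example) converts an intra refresh period into a frame interval at common broadcast frame rates.

```python
def keyframe_interval(frame_rate: float, refresh_period_seconds: float) -> int:
    """Number of frames between consecutive I-frames for a given intra refresh period."""
    return round(frame_rate * refresh_period_seconds)

def frame_type_for(index: int, interval: int) -> str:
    """Very simple GOP pattern: an I-frame at every interval boundary, P-frames elsewhere."""
    return "I" if index % interval == 0 else "P"

# A half-second refresh at common broadcast frame rates.
print(keyframe_interval(25.0, 0.5))   # 12
print(keyframe_interval(29.97, 0.5))  # 15
print([frame_type_for(i, keyframe_interval(25.0, 0.5)) for i in range(14)])
```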
Predicted (P) frames/slices
Require the prior decoding of some other picture in order to be decoded.
May contain image data, motion vector displacements, or combinations of the two (a motion-compensation sketch follows this list).
Can reference previous pictures in decoding order.
Older standard designs use only one previously decoded picture as a reference during decoding, and require that picture to also precede the P picture in display order.
In H.264, can use multiple previously decoded pictures as references during decoding, and can have any arbitrary display-order relationship relative to the picture(s) used for their prediction.
Typically require fewer bits for encoding than I pictures do.
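The following sketch shows the basic mechanics behind these points, assuming a single motion vector per 16x16 macroblock and a residual stored as a pixel difference; the function names and test data are illustrative only.

```python
import numpy as np

MB = 16  # 16x16 macroblock

def predict_block(reference: np.ndarray, x: int, y: int, mv: tuple[int, int]) -> np.ndarray:
    """Fetch the 16x16 block the motion vector points at in the reference picture."""
    dx, dy = mv
    return reference[y + dy:y + dy + MB, x + dx:x + dx + MB]

def encode_p_macroblock(reference, current, x, y, mv):
    """A P-macroblock as (motion vector, residual): residual = current block - prediction."""
    prediction = predict_block(reference, x, y, mv)
    residual = current[y:y + MB, x:x + MB].astype(np.int16) - prediction.astype(np.int16)
    return mv, residual

def decode_p_macroblock(reference, x, y, mv, residual):
    """Reconstruct the block by adding the residual onto the motion-compensated prediction."""
    prediction = predict_block(reference, x, y, mv).astype(np.int16)
    return (prediction + residual).astype(np.uint8)

reference = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
current = np.roll(reference, shift=4, axis=1)        # whole scene shifted 4 pixels right
mv, residual = encode_p_macroblock(reference, current, x=16, y=16, mv=(-4, 0))
print("residual energy:", int(np.abs(residual).sum()))  # 0: the motion vector fully explains the block
assert np.array_equal(decode_p_macroblock(reference, 16, 16, mv, residual),
                      current[16:32, 16:32])
```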
Bi-directional predicted (B) frames/slices
Require the prior decoding of a subsequent frame in order to be decoded.
May contain image data and/or motion vector displacements. Older standards allow only a single global motion compensation vector for the entire frame or a single motion compensation vector per macroblock.
Include some prediction modes that form a prediction of a motion region by averaging the predictions obtained using two different previously decoded reference regions. Some standards allow two motion compensation vectors per macroblock.
In older standards, B-frames are never used as references for the prediction of other pictures. As a result, a lower quality encoding can be used for such B-frames because the loss of detail will not harm the prediction quality for subsequent pictures.
H.264 relaxes this restriction, and allows B-frames to be used as references for the decoding of other frames at the encoder's discretion.
Older standards use exactly two previously decoded pictures as references during decoding, and require one of those pictures to precede the B-frame in display order and the other one to follow it.
H.264 allows one, two, or more than two previously decoded pictures to be used as references during decoding, and these references can have any arbitrary display-order relationship relative to the B-frame that uses them.
Because they can draw prediction data from both preceding and following pictures, B-frames typically require fewer bits for encoding than either I- or P-frames; the sketch below illustrates the averaging idea.
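As a minimal sketch of the averaging behind bi-prediction (ignoring the weighted prediction and sub-pixel interpolation real codecs use; the function names are illustrative), a B-macroblock that lies halfway through a fade can be predicted almost exactly from the average of its two references:

```python
import numpy as np

def bi_predict(past_block: np.ndarray, future_block: np.ndarray) -> np.ndarray:
    """Bi-prediction in its simplest form: average the block predicted from a past
    reference and the block predicted from a future reference."""
    avg = (past_block.astype(np.uint16) + future_block.astype(np.uint16) + 1) // 2
    return avg.astype(np.uint8)

def encode_b_residual(current_block, past_block, future_block):
    """A B-macroblock stores only the difference from the averaged prediction."""
    prediction = bi_predict(past_block, future_block).astype(np.int16)
    return current_block.astype(np.int16) - prediction

# A block that fades linearly between two reference frames: the average predicts it exactly.
past = np.full((16, 16), 80, dtype=np.uint8)
future = np.full((16, 16), 120, dtype=np.uint8)
current = np.full((16, 16), 100, dtype=np.uint8)
residual = encode_b_residual(current, past, future)
print("max residual:", int(np.abs(residual).max()))  # 0 here: averaging captures the fade
```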