LEADTOOLS Multimedia SDKs offer Media Foundation transforms that significantly lower the complexity of code needed to play, capture, convert, stream, and process audio and video data. LEADTOOLS provides interfaces for .NET (C# & VB) and C/C++ to add Media Foundation technology to your development project without dealing with the complexity of programming directly with Microsoft's Media Foundation APIs.
Overview of LEADTOOLS Media Foundation Transform Technology
- Create Media Foundation applications with the ability to compress, decompress, process, stream, and enhance audio and video data
- Utilize and control many Media Foundation Transforms
- Use LEADTOOLS proprietary Media Foundation Transforms or any other third-party transform installed on the machine
- Includes .NET (C# & VB) and C DLL libraries for 32 and 64-bit development
What is Media Foundation?
Microsoft Media Foundation was introduced as the eventual replacement for DirectShow. It features many enhancements and improvements for audio and video playback quality, high-definition content, hardware acceleration, and more. Like its predecessor, Media Foundation is a COM-based multimedia framework; its processing model is built around Media Foundation Transforms (MFTs), which do the actual processing work.
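To make the MFT idea concrete, here is a minimal sketch in portable C++. This is a toy model, not the real API: the actual COM interface is IMFTransform, whose core contract is the same push/pull pair mirrored below (ProcessInput accepts a sample, ProcessOutput produces one). The class and method names here are illustrative only.

```cpp
#include <string>
#include <vector>

// Toy stand-in for an MF media sample (real MF uses IMFSample).
struct Sample {
    std::string streamType;            // e.g. "video/h264" or "video/raw"
    std::vector<unsigned char> data;   // payload bytes
};

// Simplified model of the IMFTransform contract.
class Transform {
public:
    virtual ~Transform() = default;
    virtual void processInput(const Sample& in) = 0;  // analogous to IMFTransform::ProcessInput
    virtual Sample processOutput() = 0;               // analogous to IMFTransform::ProcessOutput
};

// Hypothetical decoder MFT: consumes compressed samples, emits raw ones.
class ToyDecoder : public Transform {
    Sample pending_;
public:
    void processInput(const Sample& in) override { pending_ = in; }
    Sample processOutput() override {
        // "Decoding" here is just relabeling the stream, for illustration.
        return Sample{"video/raw", pending_.data};
    }
};
```

In the real framework, the Media Session drives this push/pull loop for every transform in the topology; application code rarely calls the methods directly.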
What are Media Foundation Components?
Since the entire concept of rendering, converting, and capturing files in Media Foundation is based on components and transforms, it is important to understand the role of each component within the topology.
Media Source. This is usually the first component in the topology. It is responsible for reading the input data and splitting the media streams. The data may come from a file on disk, a network, a hardware device, or any other method. Each media source contains one or more streams, and each stream delivers data of one type, such as audio or video.
Audio/Video Decoder. These transforms handle the actual decoding or decompression. They do not demultiplex (split), so data should be demultiplexed before it is passed to the decoder. Therefore, they are usually connected to the media source output. For example, the video decoder input might be a compressed video stream such as MPEG-2, and the output could be raw video data.
Renderer. These components are used to actually render data. Data could be audio, video, or both. For example, when playing a media file with both audio and video, an audio renderer would handle directing the audio data to the sound device, and a video renderer would handle displaying the video on the screen. The input of the renderer is usually uncompressed data coming from the decoder.
Audio/Video Encoder. These transforms are used to compress audio or video data. The input is usually uncompressed audio or video data, and the output is the compressed version of the same data.
Media Sink. These components are usually the last components in the topology. They are responsible for joining (multiplexing) media streams and writing the data to disk to create a media file, or they can send the data to some other location, such as over a network.
Audio/Video Processor (Transform). These are usually custom transforms used to perform some type of data processing or generate some type of event. LEAD has created many audio and video processors, such as the Video Resize Transform, used to resize a video stream. These transforms typically handle uncompressed data, so they would be inserted in the topology before the encoder or after the decoder.
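The component roles above can be sketched as a toy transcoding topology in plain C++. Every name in this block (mediaSource, decode, resize, encode, mediaSink, Frame) is a hypothetical stand-in for illustration, not a LEADTOOLS or Media Foundation API; the point is only the order in which data flows through the stages.

```cpp
#include <string>
#include <vector>

// Toy stand-in for a media sample flowing through the topology.
struct Frame {
    std::string type;  // e.g. "video/h264" (compressed) or "video/raw"
    int width;
    int height;
};

// Media source: reads input and delivers a stream (stand-in for a file reader).
std::vector<Frame> mediaSource() {
    return { {"video/h264", 1920, 1080}, {"video/h264", 1920, 1080} };
}

// Decoder: compressed in, raw out.
Frame decode(const Frame& f) { return {"video/raw", f.width, f.height}; }

// Processor: works on raw data, so it sits after the decoder
// (conceptually like a resize transform).
Frame resize(const Frame& f, int w, int h) { return {f.type, w, h}; }

// Encoder: raw in, compressed out.
Frame encode(const Frame& f) { return {"video/h265", f.width, f.height}; }

// Media sink: collects the final stream (stand-in for a file writer).
std::vector<Frame> mediaSink(const std::vector<Frame>& frames) { return frames; }

// Wire the stages in topology order: source -> decoder -> processor -> encoder -> sink.
std::vector<Frame> runTopology(int outWidth, int outHeight) {
    std::vector<Frame> out;
    for (const Frame& f : mediaSource())
        out.push_back(encode(resize(decode(f), outWidth, outHeight)));
    return mediaSink(out);
}
```

Note the ordering constraint the prose describes: the processor handles uncompressed data, so it must sit between the decoder and the encoder; for playback rather than transcoding, a renderer would replace the encoder and sink.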