5604n - DVI Article Summary

The Evolution of DVI System Software

Group 2: Carolyn O'Hare, Lauren Barton, Robert Ryan, Nelson Kile, Martin Falck

This article discusses DVI, the hardware and software tools used to integrate multimedia into a desktop computing environment. It begins with a description of the first generation of DVI products, which ran on IBM PC compatibles under the MS-DOS operating system. The authors describe in detail the conceptual model that they refer to as a "super VCR." This system has three parts: the Real-Time Executive, the Audio/Video Subsystem, and a special graphics library.

The next generation of DVI evolved due to advances in hardware and the desire to move DVI to other platforms and operating environments. The new system software model is called the "digital video production studio." It has several components, including the analog interface, the display system, the sampler, the stream manager, the effects processor, and the mixer. This model drove Intel's second generation of DVI system software, known as the Audio Video Kernel. The design goals of the Audio Video Kernel (AVK) system architecture are as follows:

- Design a system that would be portable to different host platforms and operating environments.
- Allow the system to expand as the power of the hardware increases.
- Keep reliance on the host CPU to a minimum.

The Audio/Video Library, the Audio/Video Driver, and the microcode engine make up the layers of the Audio/Video Kernel. Each layer is discussed in detail in the article. The article concludes with an in-depth discussion of the AVK.

===============================================================

From: (Group 5) Shirley Carr, Mike Joyce, Zakia Khan, Vas Madhava

Article Summary (GREE92): The Evolution of DVI System Software

Multimedia (MM) has increased the strain on computer systems.
The DVI system, a set of hardware and software tools, was developed to deal with it. This system includes a PC or workstation, the ActionMedia playback card, system software, and a CD-ROM drive. With this configuration and some peripherals one can create MM elements on disk or on a network.

The ActionMedia system is composed of three main subsystems:

1) RTE - The real-time executive. This provides real-time multitasking support to the AVSS. It also takes full control of the interrupt vectors.
2) AVSS - The audio/video subsystem. This controls playback of digital audio and video files.
3) Graphics library - Supports special-purpose video effects as well as drawing primitives.

The model used to describe the system is the "Super VCR," which has the standard Stop, Rewind, Play, Pause, and Fast Forward buttons along with special effects, which are functions from the graphics library. The AVSS is composed of three parallel tasks:

1) Server task - Reads frames of compressed video into memory.
2) Decode task - Requests that each frame be decompressed by the pixel processor.
3) Display task - Displays the decompressed frame on the monitor.

All of this must be done at 30 frames/second. The system doesn't use the DOS BIOS because it is too slow.

Two new requirements made the Super VCR model obsolete:

1) The system should be portable across operating systems.
2) It should have decreased reliance on the host CPU for real-time response.

The new conceptual model makes the system more like a production studio, with the following components:

1) Analog Interface - A patch bay that allows various devices to be connected together, each connection being a "physical channel".
2) The Display System - Controls the display of visual data on the screen, possibly handling multiple views.
3) The Sampler - Captures (samples) video and still images just as it does audio.
4) The Stream Manager - Responsible for managing the flow of data from one device to another; these are logical channels, not physical ones.
5) The Effects Processor - Used to add image effects and graphics to stills and video.
6) The Mixer - The heart of the studio. It assigns incoming streams to output channels and gives some ability to modify the data.

Bandwidth saturation occurs when (a) too many inputs go to one output, (b) too many effects are installed concurrently, or (c) too many devices are talking and the bus gets overloaded.

The studio model was used to drive the design of the second-generation DVI system: the AVK, the audio/video kernel. The AVK system uses the 82750PB pixel processor, runs in multiple operating environments, and has windowing support. The design goals for it were:

1) Make it portable across many host platforms.
2) Be expandable as the system hardware power increases.
3) Decrease reliance on the host CPU.

The AVK system architecture is composed of the following:

1) Environment-specific APIs - Used to (a) do reads/writes to the host file system and (b) integrate AVK with the environment's windowing system.
2) Audio/Video Library - Has most of the system functionality. Its data types are generalized to streams and groups. The VCR functions are also handled here, along with C functions used for memory management, etc.
3) Audio/Video Driver - Encapsulates knowledge of the ActionMedia hardware and isolates it.
4) Microcode Engine - Has the function DoMotion(), which manages decompression, and the function CopyScale(), which scales video images to the display buffer.

Objects in the Audio/Video Library include:

1) Analog interface - Has two objects: the AVK session and the AVK device.
2) Stream Manager (tapedeck) - Implemented as a collection of objects that control digital data streams.
3) Connector (mixer) - A high-level abstraction of the copy program.
4) View - Displays the subsystem's visual region.
5) Sampler - The image and the image buffer.

Separating the decompression area from the display bitmap allowed for the copy/scale function.
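The copy/scale separation described above can be sketched in code. This is a minimal illustrative sketch, not the AVK's actual CopyScale() implementation: the Python function name, the nearest-neighbor scaling, and the row-major buffer layout are all assumptions made for illustration.

```python
def copy_scale(src, src_w, src_h, dst_w, dst_h):
    """Copy a row-major pixel buffer into a destination buffer of a
    different resolution using nearest-neighbor scaling (a hypothetical
    stand-in for the AVK's CopyScale() idea)."""
    dst = [0] * (dst_w * dst_h)
    for y in range(dst_h):
        sy = y * src_h // dst_h      # map destination row to source row
        for x in range(dst_w):
            sx = x * src_w // dst_w  # map destination column to source column
            dst[y * dst_w + x] = src[sy * src_w + sx]
    return dst

# A 2x2 "decompressed" frame scaled up into a 4x4 display bitmap.
frame = [1, 2,
         3, 4]
window = copy_scale(frame, 2, 2, 4, 4)
# window is [1, 1, 2, 2,  1, 1, 2, 2,  3, 3, 4, 4,  3, 3, 4, 4]
```

Because the decompression buffer and the display bitmap are kept separate, the same decoded frame can be scaled to whatever resolution each display window needs.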
Thus different resolution screens could be driven, as could multiple simultaneous video windows. Real-time video and audio are possible mainly due to the programmability of the 82750PB; within the chip, this work is done by the microcode engine.

Two types of video algorithm are implemented in the system: PLV (Production Level Video) and RTV (Real Time Video). The former refers to video compressed off-line to give the best possible image; the latter refers to full-motion video where compression/decompression is done in real time. For still images many algorithms are supported, including JPEG. For audio, ADPCM 4E and PCM are supported.

Overall, the system built around the 82750PB chip is good because it:

1) Leaves the CPU alone so it can do other tasks and, likewise, does not require another host processor.
2) Is more portable.
3) Is programmable by the host.

===============================================================

MM Article Summaries by Group I: Fitzgerald, Kalafut, Klein, and Muhlenburg.

"The Evolution of DVI System Software" by James L. Green

As computing has evolved, space-consuming multimedia has created a need for real-time decompression and high I/O bandwidth. DVI multimedia products integrate these new non-textual data types in a desktop environment. DVI development started in 1983, with the first product suite released in 1989. DVI has continued to evolve along with more powerful hardware and software.

The original ActionMedia software system utilized the "Super VCR" conceptual model. The AVSS uses three parallel operations - the server task, the decode task, and the display task. The RTX schedules all the CPU's tasks, with AVSS tasks having the highest priority, including preemption. The AVSS/RTX "Super VCR" model could not handle everything, so the next-generation DVI systems required a different conceptual model.
This new conceptual model is the digital video production studio, which includes the analog interface to physical channels, the display system, the sampler or sampling synthesizer, the stream manager, the effects processor, and the audio/video mixer. Cases were used to show that the new model is a superset of the "Super VCR" model.

This digital production studio model drove Intel's second generation, called the Audio Video Kernel (AVK). The three goals of the AVK were portability, expandability, and reduced reliance on the CPU. The AVK consists of four layers: the microcode engine (with functions such as DoMotion), the A/V Driver, the A/V Library, and an environment-specific layer for read/write operations and integration with windowing systems. The AVK interface contains objects corresponding to aspects of the studio model. The AVK uses a more flexible buffering scheme thanks to the new 82750PB pixel processor, which provides a "microcode engine" for task scheduling. So the AVK uses the 82750PB as a co-processor, not a slave as in the AVSS/RTX. All in all, the AVK implementation of the digital video production studio accomplishes everything that it set out to accomplish.
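The three-task AVSS pipeline that both summaries describe (a server task reads compressed frames, a decode task decompresses them, a display task shows them, all at 30 frames/second) can be sketched as a simple staged pipeline. This is an illustrative sketch only: the task names come from the summaries, but the queue-based structure and every function signature are assumptions, and the real AVSS ran these as preemptively scheduled real-time tasks rather than sequential calls.

```python
from collections import deque

# A frame must be read, decoded, and displayed within the 30 frames/second
# budget, i.e. about 33.3 ms per frame.
FRAME_BUDGET_MS = 1000 / 30

def server_task(file_frames, read_queue):
    """Read compressed frames from the 'file' into memory."""
    for frame in file_frames:
        read_queue.append(frame)

def decode_task(read_queue, display_queue):
    """Hand each queued frame to the pixel processor for decompression
    (stubbed here as a string transformation)."""
    while read_queue:
        compressed = read_queue.popleft()
        display_queue.append(f"decoded({compressed})")

def display_task(display_queue, screen):
    """Move decoded frames to the monitor (here, just a list)."""
    while display_queue:
        screen.append(display_queue.popleft())

read_q, display_q, screen = deque(), deque(), []
server_task(["f0", "f1", "f2"], read_q)
decode_task(read_q, display_q)
display_task(display_q, screen)
# screen is ["decoded(f0)", "decoded(f1)", "decoded(f2)"]
```

The per-frame budget makes the summaries' point about bypassing the slow DOS BIOS concrete: every stage of the pipeline together must fit inside roughly 33 ms, so any slow I/O path breaks real-time playback.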