UNIT-III
Basics of Video - Analog and Digital Video, how to use video on a PC, introduction to graphics accelerator cards, DirectX, introduction to A/D, DV and IEEE 1394 cards, digitization of analog video to digital video, interlacing and non-interlacing, brief note on various video standards - NTSC, PAL, SECAM, HDTV. Introduction to video capturing media and instruments - videodisk, DVCAM, camcorder. Introduction to digital video compression techniques and various file formats - AVI, MPEG, MOV, RealVideo.
Brief introduction to video editing and movie-making tools - QuickTime, Video for Windows and Adobe Premiere.
Analog Video
When light reflected from an object passes through a video camera lens, that light is converted into electronic signals by a special sensor called a charge-coupled device (CCD). Top-quality broadcast cameras may have as many as three CCDs (one each for red, green, and blue) to enhance the resolution of the camera. The output of the CCD is processed by the camera into a signal containing three channels of color information and synchronization pulses (sync). There are several video standards for managing CCD output, each dealing with the amount of separation between the components of the signal. The more separated the color information in the signal, the higher the quality of the image (and the more expensive the equipment).
If each channel of color information is transmitted as a separate signal on its own conductor, the signal output is called RGB (red, green, and blue), which is the preferred method for higher-quality and professional video work. Output can also be split into just two color channels (as in S-Video), which results in lower video quality than RGB.
Digital Video
Analog video has been used for years in recording/editing studios and television broadcasting. To incorporate video content into a multimedia production, the video needs to be converted into digital format.
It has already been mentioned that processing digital video on personal computers was initially very difficult, firstly because of the huge file sizes involved, and secondly because of the large bit rate and processing power required. Full-screen video only became a reality after the advent of the Pentium II processor together with fast disks capable of delivering the required throughput. Even with these powerful resources, delivering video files was difficult until the prices of compression hardware and software came down. Compression helped to reduce the size of video files to a great extent, so a lower bit rate suffices to transfer them over communication buses. Nowadays video is rarely viewed in uncompressed form unless there is a specific reason for doing so, e.g. to maintain the highest quality for medical analysis.
Digitizing video in general requires a video capture card and associated recording software. The capture card is usually installed in the PC; it accepts analog video from a source device and converts it into a digital file using the recording software. Alternatively, the capture hardware can be inside a digital video camera capable of producing a digital video output and recording it onto tape. The digital output from a digital video camera can also be fed to a PC after any necessary format conversion.
Basics of Video
Of all the multimedia elements, video places the highest performance demand on your computer and its memory and storage. Consider that a high-quality color still image on a computer screen could require as much as a megabyte of storage. Multiply this by 30 (the number of times per second that the still picture is replaced to provide the appearance of motion) and you would need 30 megabytes of storage to play your video for one second, or 1.8 gigabytes of storage for a minute. Just moving these pictures from computer memory to the screen at that rate would challenge the processing capability of a supercomputer. Multimedia technologies and research efforts today deal with compressing digital video image data into manageable streams of information, so that a massive amount of image data can be squeezed into a comparatively small file that still delivers a good viewing experience on the intended platform during playback.
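The arithmetic is easy to verify. A minimal Python sketch using only the figures quoted above (1 MB per frame, 30 frames per second):

```python
# Uncompressed video storage estimate, using the figures quoted above.
frame_size_mb = 1.0   # ~1 MB for a high-quality full-screen still image
fps = 30              # frames per second

per_second_mb = frame_size_mb * fps          # 30 MB per second
per_minute_gb = per_second_mb * 60 / 1000    # ~1.8 GB per minute

print(f"One second of video: {per_second_mb:.0f} MB")
print(f"One minute of video: {per_minute_gb:.1f} GB")
```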
Carefully
planned, well-executed video clips can make a dramatic difference in a
multimedia project.
Using Video On PC
Analog video needs to be converted to digital format before it can be displayed on a PC screen. The procedure for conversion involves two types of devices: source devices and capture devices.
The source device can be one of the following:
camcorder with pre-recorded video tape
VCP with pre-recorded video tape
video camera with live footage
We need a video capture card to convert the analog signal to a digital signal, along with video capture software such as AVI Capture, AVI to MPEG Converter, MPEG Capture, DAT to MPEG Converter or MPEG Editor.
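To make the capture step concrete, here is a minimal sketch using the third-party OpenCV library as a modern stand-in for the capture software named above (the library, the device index 0 and the 640x480 output size are assumptions, not part of the original toolchain):

```python
import cv2  # third-party OpenCV library (assumed installed)

# Open the first video capture device the OS exposes
# (a capture card, DV bridge, or webcam).
cap = cv2.VideoCapture(0)

# Write the digitized frames to an AVI file using the MJPG codec.
fourcc = cv2.VideoWriter_fourcc(*"MJPG")
out = cv2.VideoWriter("capture.avi", fourcc, 30.0, (640, 480))

for _ in range(30 * 10):        # roughly 10 seconds at 30 fps
    ok, frame = cap.read()      # one digitized frame as a pixel array
    if not ok:
        break
    out.write(cv2.resize(frame, (640, 480)))

cap.release()
out.release()
```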
DirectX
Microsoft changed the entire multimedia standards game with
its DirectX standard in Windows 95.
The idea was that DirectX offered a load of commands, also
known as APIs, which did things like "make a sound on the left" or
"draw a sphere in front". Games would then simply make DirectX calls
and the hardware manufacturers would have to ensure their sound and graphics
card drivers understood them.
The audio portion of DirectX 1 and 2 was called DirectSound,
and this offered basic stereo left and right panning effects. As with other
DirectX components, this enabled software developers to write directly to any
DirectX-compatible sound card with multiple audio streams, while utilizing 3D
audio effects. Each audio channel can be treated individually, supporting
multiple sampling rates and the ability to add software-based effects.
DirectSound itself acts as a sound-mixing engine, using system RAM to hold the
different audio streams in play for the few milliseconds they must wait before
being mixed and sent on to the sound card. Under ideal conditions, DirectSound
can mix and output the requested sounds in as little as 20 milliseconds.
DirectX 3 introduced DirectSound3D (DS3D) which offered a
range of commands to place a sound anywhere in 3D space. This was known as
positional audio, and required significant processing power. Sadly we had to
wait for DirectX 5 before Microsoft allowed DS3D to be accelerated by
third-party hardware, reducing the stress on the main system CPU. DirectX 6
supported DirectMusic, offering increased versatility in composing music for
games and other applications.
DS3D positional audio is one of the features supported by the
latest generation of PCI sound cards. Simply put, positional audio manipulates
the characteristics of sounds to make them seem to come from a specific
direction, such as from behind or from far to the left. DirectSound3D gives
game developers a set of API commands they can use to
position audio elements. Furthermore, as with much of DirectX, DirectSound3D is scalable: if an application asks for positional effects and no hardware support for such effects is found, DirectSound3D provides the positional effect in software, using the CPU for processing.
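DirectSound3D itself is a COM API; the toy Python sketch below is not that API, but it illustrates the kind of per-stream computation a software fallback must perform: deriving left/right gains and distance attenuation from a 3D source position (the rolloff and constant-power panning rules here are illustrative assumptions):

```python
import math

def stereo_gains(x, y, z):
    """Toy positional audio: listener at the origin, x = right, z = forward.
    Returns (left, right) channel gains for a source at (x, y, z)."""
    distance = math.sqrt(x * x + y * y + z * z) or 1e-6
    attenuation = 1.0 / max(distance, 1.0)   # simple inverse-distance rolloff
    pan = x / distance                       # -1 = hard left, +1 = hard right
    # Constant-power panning keeps perceived loudness steady across the arc.
    angle = (pan + 1) * math.pi / 4
    return attenuation * math.cos(angle), attenuation * math.sin(angle)

print(stereo_gains(-2.0, 0.0, 1.0))  # a source ahead and far to the left
```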
DS3D may have supported positional audio, but it didn't offer
much support for adding reverb, let alone considering individual reflections,
to simulate different environments. Fortunately DS3D does support extensions to
the API, and this need was soon met by a couple of new sound standards which
have gained widespread support from games developers: Aureal's A3D technology and Creative
Technology's Environmental Audio Extensions (EAX).
Broadcast video standards
Three analog broadcast video standards are commonly in use around the world: NTSC, PAL, and SECAM. In the United States, the NTSC standard is being phased out, replaced by the ATSC digital television standard. Because these standards and formats are not easily interchangeable, it is important to know where your multimedia project will be used. A video cassette recorded in the USA (which uses NTSC) will not play on a television set in a European country (which uses either PAL or SECAM), even though the recording method and style of the cassette is "VHS". Likewise, tapes recorded in the European PAL or SECAM formats will not play back on an NTSC video cassette recorder. Each system is based on a different standard that defines the way information is encoded to produce the electronic signal that ultimately creates a television picture. Multi-format VCRs can play back all three standards but typically cannot dub from one standard to another; dubbing between standards still requires high-end specialized equipment.
National Television Standards Committee (NTSC)
The United
States, Canada, Mexico, Japan, and many other countries use a system for
broadcasting and displaying video that is based upon the specifications set
forth by the 1952 National Television Standards Committee. These standards
define a method for encoding information into the electronic signal that
ultimately creates a television picture. As specified by the NTSC standard, a
single frame of video is made up of 525 horizontal scan lines drawn onto the inside face of a phosphor-coated picture tube every 1/30th of a second by a fast-moving electron beam.
the image as stable. The electron beam actually makes two passes as it draws a
single video frame, first laying down all the odd-numbered lines, then all the
even-numbered lines. Each of these passes (which happen at a rate of 60 per
second, or 60 Hz) paints a field, and the two fields are combined to create a
single frame at a rate of 30 frames per second (fps). (Technically, the speed
is actually 29.97 Hz.) This process of building a single frame from two fields
is called interlacing, a technique that helps to prevent flicker on television
screens. Computer monitors use a different progressive-scan technology, and
draw the lines of an entire frame in a single pass, without interlacing them
and without flicker.
Phase Alternate Line (PAL)
The Phase Alternate Line (PAL)
system is used in the United Kingdom, Western Europe, Australia, South Africa,
China, and South America. PAL increases the screen resolution to 625 horizontal
lines, but slows the scan rate to 25 frames per second. As with NTSC, the even
and odd lines are interlaced, each field taking 1/50th of a second to draw (50
Hz).
Sequential Color and Memory
(SECAM)
The Sequential Color and Memory (SECAM) system is used in France, Eastern Europe, the former USSR, and a few other countries.
Although SECAM is a 625-line, 50 Hz system, it differs greatly from both the
NTSC and the PAL color systems in its basic technology and broadcast method.
Often, however, TV sets sold in Europe utilize dual components and can handle
both PAL and SECAM systems.
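A compact way to summarize the three systems is to derive the frame rate from the field rate, since two interlaced fields make one frame. A small sketch using only the figures given above:

```python
# Scan lines and field rates of the three analog broadcast standards.
standards = {
    "NTSC":  {"lines": 525, "field_rate_hz": 60},   # nominal; technically 29.97 fps
    "PAL":   {"lines": 625, "field_rate_hz": 50},
    "SECAM": {"lines": 625, "field_rate_hz": 50},
}

for name, s in standards.items():
    frames_per_second = s["field_rate_hz"] / 2   # two fields per frame
    print(f"{name}: {s['lines']} lines, {s['field_rate_hz']} Hz fields, "
          f"{frames_per_second:.0f} fps")
```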
Advanced Television Systems Committee (ATSC) and Digital Television (DTV)
What started as
the High Definition Television (HDTV) initiative of the Federal Communications
Commission in the 1980s, changed first to the Advanced Television (ATV)
initiative and then finished as the Digital Television (DTV) initiative by the
time the FCC announced the change in 1996. This standard, slightly modified
from the Digital Television Standard and Digital Audio Compression Standard,
moves U.S. television from an analog to a digital standard and provides TV stations with sufficient bandwidth to present four or five Standard Television signals (STV, providing the NTSC resolution of 525 lines with a 4:3 aspect ratio, but in a digital signal) or one HDTV signal (providing 1,080 lines of resolution with a movie screen's 16:9 aspect ratio). More significantly for multimedia producers, this emerging standard allows for transmission of data to computers and for new ATV interactive services. As of May 2003, 1,587 TV stations in the United States (94 percent) had been granted a DTV construction permit or license. Among those, 1,081 stations were actually broadcasting a DTV signal, almost all simulcasting their regular TV signal. According to
the current schedule, all the stations are to cease broadcasting on their
analog channel and completely switch to a digital signal by 2006.
High
Definition Television (HDTV)
HDTV provides high resolution in a 16:9 aspect ratio. This aspect ratio allows the viewing of Cinemascope and Panavision movies. There is contention between the broadcast and computer industries about whether to use interlacing or progressive-scan technologies. The broadcast industry has promulgated an ultra-high-resolution 1920x1080 interlaced format to become the cornerstone of a new generation of high-end entertainment centers, but the computer industry would like to settle on a 1280x720 progressive-scan system for HDTV. While the 1920x1080 format provides more pixels than the 1280x720 standard, the refresh rates are quite different. The higher-resolution interlaced format delivers only half the picture every 1/60th of a second, and because of the interlacing, highly detailed images show a great deal of screen flicker at 30 Hz. The computer industry argues that the picture quality at 1280x720 is superior and steady. Both formats have been included in the HDTV standard by the Advanced Television Systems Committee.
Today's
multimedia monitors typically use a screen pixel ratio of 4:3 (800x600), but
the new HDTV standard specifies a ratio of 16:9 (1280x720), much wider than
tall. There is no easy way to stretch and shrink existing graphics material to
this new aspect ratio, so new multimedia design and interface principles will
need to be developed for HDTV presentations.
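The mismatch is easy to quantify. A minimal sketch of fitting a 4:3 image onto a 16:9 screen without distortion (the leftover area becomes the familiar pillarbox bars):

```python
def fit_preserving_aspect(src_w, src_h, dst_w, dst_h):
    """Scale a source image to fit a destination screen without
    distortion; the leftover area becomes pillarbox/letterbox bars."""
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    bars_x, bars_y = dst_w - new_w, dst_h - new_h
    return new_w, new_h, bars_x, bars_y

# An 800x600 (4:3) image shown on a 1280x720 (16:9) HDTV screen:
print(fit_preserving_aspect(800, 600, 1280, 720))  # -> (960, 720, 320, 0)
```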
Digitization of Analog Video to Digital Video
Video, like
sound, is usually recorded and played as an analog signal. It must therefore be
digitized in order to be incorporated into a multimedia title. A video source,
such as a video camera, VCR, TV, or videodisc, is connected to a video capture
card in a computer. As the video source is played, the analog signal is sent to
the video card and converted into a digital file that is stored on the hard
drive. At the same time, the sound
from the video source is also digitized.
One of the
advantages of digitized video is that it can be easily edited. Analog video,
such as a videotape, is linear: there is a beginning, a middle, and an end. If you
want to edit it, you need to continually rewind, pause, and fast-forward the
tape to display the desired frames. Digitized video, on the other hand, allows
random access to any part of the Video, and editing can be as easy as the
cut-and-paste process in a word processing program. In addition, adding special
effects such as fly-in titles and transitions is relatively simple.
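In spirit, nonlinear editing really is list manipulation. A toy sketch that treats a clip as a Python list of frame labels shows the random-access cut-and-paste idea (the frame counts are arbitrary):

```python
# A toy "clip" as a list of frame labels (real editors work the same way
# on frame indices, just with far more data behind each entry).
clip = [f"frame{i:03d}" for i in range(300)]   # a 10-second clip at 30 fps

title  = clip[:30]        # the first second, accessed instantly
ending = clip[-30:]       # the last second, no fast-forwarding required
middle = clip[120:180]    # two seconds cut from the middle

new_clip = title + middle + ending   # paste the pieces in a new order
print(len(new_clip), "frames in the edited sequence")   # -> 120
```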
Introduction to Video Compression Technique
Because of the large sizes associated with video files, video compression/decompression programs, known as codecs, have been developed. These programs can substantially reduce the size of video files, which means that more video can fit on a single CD and that the speed of transferring video from a CD to the computer can be increased. There are two types of compression: lossless and lossy. Lossy compression actually eliminates some of the data in the image and therefore provides greater compression ratios than lossless compression. When the compression ratio is made high, the quality of the decompressed image becomes poor. Thus, the trade-off is file size versus image quality. Lossy compression is applied to video because some drop in quality is not noticeable in moving images.
Certain standards have been established for compression programs, including JPEG (Joint Photographic Experts Group) and MPEG (Motion Picture Experts Group). Both of these reduce the file size of graphic images by eliminating redundant information. Often areas of an image (especially backgrounds) contain similar information. JPEG compression identifies these areas and stores them as blocks of pixels instead of pixel by pixel, thus reducing the amount of information needed to store the image. Compression ratios of 20:1 can be achieved without substantially affecting image quality. A 20:1 compression ratio would reduce a 1 MB file to only 50 KB.
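A quick check of that arithmetic (taking 1 MB as 1,000 KB for round numbers):

```python
original_kb = 1000          # a 1 MB image, taking 1 MB = 1,000 KB
ratio = 20                  # a 20:1 compression ratio

compressed_kb = original_kb / ratio
print(f"{original_kb} KB compresses to {compressed_kb:.0f} KB")  # -> 50 KB
```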
MPEG adds another process to still-image compression when working with video: it looks for the changes in the image from frame to frame. Key frames are identified every few frames, and only the changes that occur from key frame to key frame are recorded. For example, an unchanging background need be stored only once every 15 frames, while a moving spaceship in the foreground is recorded frame by frame as a series of changes.
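A heavily simplified sketch of the idea follows; real MPEG operates on motion-compensated pixel blocks, but the keyframe-plus-differences structure is the same. The "spaceship" here is just a character moving across a string:

```python
def encode(frames, key_interval=15):
    """Toy temporal compression: a full key frame every `key_interval`
    frames, and only the changed positions for the frames in between."""
    encoded = []
    for i, frame in enumerate(frames):
        if i % key_interval == 0:
            encoded.append(("key", frame))           # store the whole frame
        else:
            prev = frames[i - 1]
            delta = [(j, new) for j, (old, new)
                     in enumerate(zip(prev, frame)) if old != new]
            encoded.append(("delta", delta))         # store only the changes
    return encoded

# A static background of dots with one moving "spaceship" (the X):
frames = ["." * i + "X" + "." * (9 - i) for i in range(10)]
for kind, data in encode(frames)[:3]:
    print(kind, data)   # one full key frame, then tiny two-entry deltas
```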
MPEG
can provide greater compression ratios than JPEG, but it requires hardware (a
card inserted in the computer) that is not needed for JPEG compression. This
limits the use of MPEG compression for multimedia titles, because MPEG cards
are not standard on the typical multimedia playback system.
Two widely used video compression software programs are Apple's QuickTime (and QuickTime for Windows) and Microsoft's Video for Windows. QuickTime is popular because it runs on both Apple and Windows-based computers. It uses lossy compression coding and can achieve ratios of 5:1 to 25:1. Video for Windows uses a format called Audio Video Interleave (AVI) which, like QuickTime, synchronizes the sound and motion of a video file.
Video Formats
The AVI Format
The AVI (Audio Video Interleave)
format was developed by Microsoft. The AVI format is supported by all computers
running Windows, and by all the most popular web browsers. It is a very common
format on the Internet, but not always possible to play on non-Windows
computers. Videos stored in the AVI format have the extension .avi.
The Windows Media Format
The Windows Media format was developed by Microsoft. Windows Media is a common format on the Internet, but Windows Media movies cannot be played on non-Windows computers without an extra
(free) component installed. Some later Windows Media movies cannot play at all
on non-Windows computers because no player is available. Videos stored in the
Windows Media format have the extension .wmv.
The MPEG Format
The MPEG (Moving Pictures Expert
Group) format is the most popular format on the Internet. It is cross-platform,
and supported by all the most popular web browsers. Videos stored in the
MPEG format have the extension .mpg or .mpeg.
The QuickTime Format
The QuickTime format was developed
by Apple. QuickTime is a common format on the Internet, but QuickTime movies
cannot be played on a Windows computer without an extra (free) component
installed. Videos stored in the QuickTime format have the extension .mov.
The RealVideo Format
The RealVideo format was
developed for the Internet by Real Media. The format allows streaming of video
(on-line video, Internet TV) with low bandwidths. Because of the low bandwidth
priority, quality is often reduced.
Videos stored in the RealVideo
format have the extension .rm or .ram.
The Shockwave (Flash) Format
The Shockwave format was
developed by Macromedia. The Shockwave format requires an extra component to
play. This component comes preinstalled with the latest versions of Netscape and
Internet Explorer.
Videos stored in the Shockwave
format have the extension .swf.
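For quick reference, the extensions above can be collected into a small lookup table:

```python
import os

# The file extensions listed above, mapped to their formats.
VIDEO_FORMATS = {
    ".avi": "AVI (Audio Video Interleave)",
    ".wmv": "Windows Media",
    ".mpg": "MPEG", ".mpeg": "MPEG",
    ".mov": "QuickTime",
    ".rm": "RealVideo", ".ram": "RealVideo",
    ".swf": "Shockwave (Flash)",
}

def identify(filename):
    ext = os.path.splitext(filename.lower())[1]
    return VIDEO_FORMATS.get(ext, "unknown format")

print(identify("lecture.mov"))   # -> QuickTime
```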
Software for Capturing and Editing Videos
Several steps are needed to prepare video to be incorporated into a multimedia title. These include capturing and digitizing the video from some video source, such as a video camera, VCR, TV, or videodisc; editing the digitized video; and compressing the video. Some software programs specialize in one or another of these steps, and other programs, such as Adobe Premiere, can perform all of them. Although capturing and compressing are necessary, it is editing that receives the most attention. Editing digitized video is similar to editing analog video, except that it is easier. For one thing, it is much quicker to access frames in digital form than in analog. For example, with analog video, a lot of time is spent fast-forwarding and rewinding the videotape to locate the desired frames; whereas with digital editing you can quickly jump from the first frame to the last, or anywhere in between. Removing frames or moving them to another location is as easy as the cut-and-paste process in a word processing program. The following are some other features that may be included in editing software programs:
Incorporating
transitions such as dissolves, wipes, and spins
Superimposing titles and animating them, such as a fly-in logo
Applying special effects to various images, such as twisting, zooming,
rotating, and distorting
Synchronizing
sound with the video
Applying
filters that control color balance, brightness and contrast, blurring,
distortions, and morphing
Introduction to Video Capture Media and Instruments
Digital video cameras come in two different
image capture formats: interlaced
and progressive scan. Interlaced cameras record the
image in alternating sets of lines: the odd-numbered lines are scanned, and
then the even-numbered lines are scanned, then the odd-numbered lines are
scanned again, and so on. One set of odd or even lines is referred to as a
"field", and a consecutive pairing of two fields of opposite parity
is called a frame.
A progressive scanning digital video camera
records each frame as distinct, with both fields being identical. Thus,
interlaced video captures twice as many fields per second as progressive video
does when both operate at the same number of frames per
second.
Progressive-scan camcorders are generally more desirable because of the similarities they share with film. Both record frames progressively, which results in a crisper image, and both can shoot at 24 frames per second, which produces the motion strobing characteristic of film (a visible stutter of the subject when fast movement occurs). For these reasons, progressive-scan video cameras tend to be more expensive than their interlaced counterparts.
Standard film stocks
such as 16 mm and 35 mm
record at 24 frames per second. For video, there are
two frame rate standards: NTSC,
and PAL, which shoot at
30/1.001 (about 29.97) frames per second and 25 frames per second,
respectively.
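The exact NTSC rate, and the field/frame relationship described above, are easy to compute:

```python
ntsc_fps = 30 / 1.001   # the exact NTSC frame rate quoted above
pal_fps = 25.0

print(f"NTSC: {ntsc_fps:.3f} fps = {ntsc_fps * 2:.3f} interlaced fields/s")
print(f"PAL:  {pal_fps:.0f} fps = {pal_fps * 2:.0f} interlaced fields/s")
# A progressive camera at the same frame rate captures complete frames,
# so it records half as many fields per second as an interlaced one.
```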
Digital video can be copied with no
degradation in quality. No matter how many generations a digital source is
copied, it will be as clear as the original first generation of digital
footage.
Digital video can be processed and edited
on an NLE, or non-linear editing
station, a device built exclusively to edit video and audio.
These frequently can import from analog as well as digital sources, but are not
intended to do anything other than edit videos. Digital video can also be
edited on a personal computer which has the proper hardware and software. Using
an NLE station, digital video can be manipulated to follow an order, or
sequence, of video clips.
Digital video is used outside of movie
making. Digital television (including higher
quality HDTV) started to spread in most developed
countries in early 2000s. Digital video is also used in modern mobile phones and video conferencing systems. Digital video
is also used for Internet
distribution of media, including streaming video.
Many types of video compression exist for serving digital video over the Internet and on DVDs. Although digital techniques allow for a wide variety of edit effects, the most common is the hard cut, and an editable video format like DV allows repeated cutting without loss of quality because each frame is compressed independently, with no compression across frames. While DV video is not compressed beyond its own codec during editing, the resulting file sizes are not practical for delivery on optical discs or over the Internet, so video is recompressed for delivery with codecs such as the Windows Media format, MPEG-2, MPEG-4, Real Media, the more recent H.264, and the Sorenson media codec. Probably the most widely used formats for delivering video over the Internet are MPEG-4 and Windows Media, while MPEG-2 is used almost exclusively for DVDs, providing an exceptional image in minimal size but demanding a high level of CPU consumption to decompress.
In analog
systems, the video signal from the camera is delivered to the Video In
connector(s) of a VCR, where it is recorded on magnetic video tape. A camcorder
combines both camera and tape recorder in a single device. One or two channels
of sound may also be recorded on the video tape (mono or stereo). The video
signal is written to tape by a spinning recording head that changes the local
magnetic properties of the tape's surface in a series of long diagonal stripes.
Because the head is tilted at a slight angle compared with the path of the
tape, it follows a helical (spiral) path, which is called helical scan
recording. Each stripe represents information for one field of a video frame. A
single video frame is made up of two fields that are interlaced. Audio is
recorded on a separate straight-line track at the top of the videotape,
although with some recording systems, sound is recorded helically between the
video tracks. At the bottom of the tape is a control track containing the
pulses used to regulate speed. Tracking is the fine adjustment of the tape so that
the tracks are properly aligned as the tape moves across the playback head.
This is how your VCR works.
In
digital systems, the video signal from the camera is first digitized as a
single frame, and the data is compressed before it is written to the tape in
one of several proprietary and competing formats: DV, DVCPRO, or DVCAM.
Video-capture capability is not confined to
camcorders. Cellphones, digital
single lens reflex and compact digicams,
laptops, and personal
media players frequently offer some form of video-capture capability. In general, these multipurpose devices offer less video-capture functionality than a traditional camcorder. The absence of manual adjustments, external audio input, and even basic usability functions (such as autofocus and lens zoom) are common limitations. More importantly, few can capture to standard TV-video formats (480p60, 720p60, 1080i30), and instead record in either non-TV resolutions (320x240, 640x480, etc.) or slower frame rates (15 fps, 30 fps).
Different Types of Storage Media for Video
Some recent camcorders record video on flash memory devices, Microdrives, small hard disks, and size-reduced DVD-RAM or DVD-Rs using MPEG-1, MPEG-2 or MPEG-4 formats. Most other
digital consumer camcorders record in DV
or HDV format on tape and
transfer content over FireWire.
Camcorders are often classified by their storage device: VHS, VHS-C, Betamax, Video8 are examples of
older, videotape-based camcorders which record video in analog form. Newer
camcorders include Digital8,
MiniDV, DVD, Hard Disk
and solid-state (flash) semiconductor
memory, which all record video in digital form. In older digital camcorders the imager chip (the CCD) was considered an analog component, so the "digital" designation refers to the camcorder's processing and recording of the video. Many newer camcorders use a CMOS imager, which digitizes the image data on the sensor chip itself.
The IEEE 1394 interface
It is a serial bus
interface standard for high-speed
communications and isochronous
real-time data transfer, frequently used by personal
computers, as well as in digital audio,
digital video, automotive, and aeronautics applications.
The interface is also known by the brand names of FireWire (Apple), i.LINK (Sony), and Lynx (Texas Instruments). IEEE 1394 replaced parallel SCSI in many applications, because of
lower implementation
costs and a simplified, more adaptable cabling system.
IEEE 1394 was adopted as the High-Definition Audio-Video Network Alliance
(HANA) standard connection interface for A/V (audio/visual) component
communication and control. FireWire is also available in wireless, fiber optic, and coaxial
versions using the isochronous protocols.
Nearly all digital camcorders have included
a four-circuit 1394 interface, though, except for premium models, such
inclusion is becoming less common. It remains the primary transfer mechanism
for high end professional audio and video equipment. Since 2003 many computers
intended for home or professional audio/video use have built-in FireWire/i.LINK
ports, especially prevalent with Sony and Apple's computers. The legacy (alpha)
1394 port is also available on premium retail motherboards.
A Camcorder
A camcorder (video CAMera reCORDER) is an
electronic device that combines a video camera
and a video recorder into one unit. Equipment
manufacturers do not seem to have strict guidelines for usage of the term. Marketing materials may present a video recording device as a camcorder, while the delivery package identifies it as a video camera recorder.
In order to differentiate a camcorder from
other devices that are capable of recording video, like cell phones and compact
digital cameras, a camcorder is generally identified as a portable device
having video capture and recording as its primary function.
The earliest camcorders employed analog
recording onto videotape.
Since the 1990s digital recording has become the norm, but tape remained the primary recording medium. Starting in the early 2000s, tape as a storage medium has gradually been replaced with tapeless solutions like optical discs, hard disk drives and flash memory.
All tape-based camcorders use removable
media in the form of video cassettes. Camcorders that do not use magnetic tape are
often called tapeless camcorders and may
use optical discs (removable), solid-state flash memory
(removable or built-in) or a hard disk drive (removable
or built-in).
Camcorders that permit using more than one type of media, like a built-in hard disk drive plus a memory card, are often called hybrid camcorders.
Video cameras
originally designed for television
broadcast were large and
heavy, mounted on special pedestals, and wired to remote recorders located in
separate rooms.
As technology advanced, out-of-studio video
recording was made possible by means of compact video cameras and portable video recorders. The recording unit could be
detached from the camera and carried to a shooting location. While the camera
itself could be quite compact, the fact that a separate recorder had to be
carried along made on-location shooting a two-man job. Specialized video cassette recorders were
introduced by both JVC (VHS)
and Sony (U-matic & Betamax) to be used for mobile work.
In 1982 Sony released the Betacam system. A part of
this system was a single camera-recorder unit, which eliminated the cable
between camera and recorder and dramatically improved the freedom of a
cameraman. Betacam quickly became the standard for both news-gathering and
in-studio video editing.
In 1983 Sony released the first consumer
camcorder - the Betamovie BMC-100P. It used a Betamax cassette and could
not be held with one hand, so it was typically resting on a shoulder. In the
same year JVC released the first camcorder based on VHS-C format. In 1985 Sony
came up with its own compact video cassette format — Video8. Both formats had
their benefits and drawbacks, and neither won the format war.
In 1985, Panasonic,
RCA, and Hitachi began producing
camcorders that recorded to full-sized VHS cassette and offered up to 3 hours
of recording time. These shoulder-mount camcorders found use among industrial videographers and college TV studios. Super VHS full-sized camcorders, released in 1987, exceeded broadcast quality and provided an inexpensive way to collect news segments or videographies.
In 1986 Sony introduced the first digital
video format, D1. Video was recorded in uncompressed form and required enormous
bandwidth for its time. In 1992 Ampex used the D1 form factor to create DCT, the first digital video format that utilized data compression. The compression utilized the discrete cosine transform algorithm, which is used in most modern commercial digital video formats.
In 1995 Sony, JVC, Panasonic and other
video camera manufacturers launched DV.
Its variant using a smaller MiniDV
cassette quickly became a de facto standard for home and semi-professional
video production, for independent filmmaking and for citizen journalism.
In 2000 Panasonic launched DVCPRO HD,
expanding DV codec to support high definition. The format was intended for use
in professional camcorders and used full-size DVCPRO cassettes. In 2003 Sony,
JVC, Canon and Sharp introduced HDV,
the first truly affordable high definition video format, which used inexpensive
MiniDV cassettes.
In 2003 Sony pioneered XDCAM, the first tapeless
video format, which uses Professional Disc as
recording media. Panasonic followed next year, offering P2 solid state memory
cards as recording medium for DVCPRO HD video.
In 2006 Panasonic and Sony introduced AVCHD as an inexpensive
consumer-grade tapeless high definition video format. Presently AVCHD
camcorders are manufactured by Sony, Panasonic, Canon, JVC and Hitachi.
In 2007 Sony introduced XDCAM EX, which offers
similar recording modes to XDCAM HD,
but records on SxS
memory cards.
With the proliferation of file-based digital formats, the relationship between recording media and recording format became
weaker than ever: the same video can be recorded onto different media. With
tapeless formats, recording media has become a storage device for digital
files, signifying convergence of video and computer industries.
DVCAM
Sony's DVCAM is a professional variant of
the DV standard that uses the same cassettes as DV and MiniDV, but transports the tape 50% faster, resulting in a 50% wider track (15 micrometres instead of 10 micrometres). This variant uses the same codec as regular DV; however, the wider
track lowers the chances of dropout errors. The LP mode of consumer DV is not
supported. All DVCAM recorders and cameras can play back DV material, but DVCPRO support was only
recently added to some models like DSR-1800, DSR-2000, DSR-1600. DVCAM tapes
(or DV tapes recorded in DVCAM mode) have their recording time reduced by one
third.
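The one-third reduction follows directly from the faster tape speed, as this quick check shows (the 60-minute cassette is a nominal example):

```python
dv_minutes = 60            # nominal DV recording time of a cassette
speed_factor = 1.5         # DVCAM transports the tape 50% faster

dvcam_minutes = dv_minutes / speed_factor
reduction = 1 - dvcam_minutes / dv_minutes
print(f"{dv_minutes}-minute DV tape -> {dvcam_minutes:.0f} minutes in "
      f"DVCAM mode ({reduction:.0%} less)")   # -> 40 minutes, 33% less
```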
Because of the wider track, DVCAM has the
ability to do a frame accurate insert tape edit. DV will vary by a few frames
on each edit compared to the preview. Another feature of DVCAM is locked audio.
If several generations of copies are made on DV, the audio sync may drift. On DVCAM this does not happen.
Video Capture Card
A video capture card is essentially an expansion board that can handle a variety of different audio and video input signals and convert them from analog to digital form or vice versa. Supporting a variety of TV signal formats (e.g. NTSC, PAL, SECAM), along with the recently introduced HDTV standards, imposes a level of complexity on the design.
A typical circuit board consists of the following components:
Video input port to accept video signals from NTSC/PAL/SECAM broadcast, video camera or VCR. The input port may conform to the composite-video or S-video standards.
Video
compression-decompression hardware for video data.
Audio
compression-decompression hardware for audio data.
A/D converter
to convert the analog input video signals to digital form.
What is DV?
As you can guess, DV stands for
"Digital Video". It is the new high resolution digital video
standard.
DV is compressed at the camera, on the tape
itself. The camcorder has the DV "codec" built in.
The DV spec is a 720x480 image size with a
5:1 compression. DV video information is carried in a nominal 25 megabit per
second data stream. The color information is sampled at 4:1:1 for NTSC, and
4:2:0 for PAL.
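These numbers are mutually consistent, as a rough sanity check shows (assuming 4:1:1 sampling averages 12 bits per pixel and NTSC runs at 30/1.001 fps):

```python
width, height = 720, 480    # DV (NTSC) frame size
bits_per_pixel = 12         # 4:1:1 sampling: 8 bits luma + 4 bits chroma average
fps = 30 / 1.001            # NTSC frame rate

raw_mbps = width * height * bits_per_pixel * fps / 1e6
compressed_mbps = raw_mbps / 5      # the nominal 5:1 DV compression
print(f"raw: {raw_mbps:.1f} Mbit/s, compressed: {compressed_mbps:.1f} Mbit/s")
# -> roughly 124 Mbit/s raw, about 25 Mbit/s compressed, matching the spec
```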
Unlike MJPEG-compressed video, DV video can't be scaled: you can't change the image size or the data rate.
DV format is typically reckoned to be equal
to or slightly better than Betacam SP or MII in terms of picture quality. Two
types of DV camcorders, DVCAM and DVCPRO, are widely used in TV industry today.
However, for most of us, DV usually refers to MiniDV. MiniDV is just the home-level DV format. It is compressed to a
constant throughput of 3,600 kilobytes per second. The video quality is not as
good as Betacam, but much better than S-video.
What is FireWire?
Technically, it is the high speed, short
distance data transfer protocol IEEE1394. Apple didn’t like the numbers and so
called it "FireWire". Sony didn’t like it either, and so they called
it "iLink". And they are all the same thing.
When the FireWire concept was first
announced a few years ago, it was envisioned that it would become a new
standard that would replace SCSI and link all our consumer electronics
equipment and computers together. Now, the dust has settled and the hype has
died down. The only application for FireWire that has actually come to fruition
is for transferring digital video (DV) information directly from a camcorder
(or VCR) to your hard drive.
What's the difference between DV and
FireWire?
DV is the actual format of the video.
FireWire is the port and protocol that lets
you transfer the DV data to your computer. The full FireWire spec includes
frame accurate device control and the ability to read and write the digital
video.
When the video goes through the 1394 cable,
into the capture card, and onto the hard drive, nothing is done to the video.
It is a digital copy. It's identical to the original. And this is really nice.
How's the quality of DV?
The DV (MiniDV) spec is a 720x480 image
size, at roughly a 5:1 compression. More accurately, it is compressed at a
constant throughput of 3600 kilobytes per second which averages out to 5:1
compression.
The images are crisp, bright and have
excellent depth and contrast. In general, it's acceptable even in TV stations.
Best of all, the information is stored on
the video tape in digital form, so it can be copied over and over without any
loss.