Retro-fitting DVRs with a USB port

DVRs began as a way of digitizing analog video signals from aerial broadcasts; they evolved to digitize VHS signals from tapes and personal camcorders onto optical disc media. Because commercial movies were released on the same optical and aerial mediums, rights holders weighed in and impressed varying methods of copyright protection upon the designs.

Consumer video products have long since moved on from Standard Definition (SD), but the older analog signals captured on personal tape-based recorders remain. The rights-management schemes have withered away, yet they still make digitization more difficult.. and the lack of any method to even extract the MPEG-2 stream headed for an optical DVD burn has consigned many DVRs to landfills or abandonment.

Many brands of video recorder have at one time or another used commodity optical disc "DVD-R" burners, which almost universally rely upon the ATA Packet Interface (ATAPI) to conduct a recording session over an IDE (PATA) or SATA bus. These are not new designs, and they are well documented. The signaling cables are standardized.. and although there was flirtation in later years with removing the microcontroller unit managing the IDE bus from the drive motherboard and placing it closer to the DVR main motherboard.. integrating it or placing it on a daughter card.. often the signal paths remained accessible down near the mainboard.

That means, with exceptions, many designs had a common internal IDE signal bus with a maximum clock of 25 MHz for UltraDMA/100, and they often ran much slower.

In fact the CD and DVD xSpeed ratings would often run only at the speed negotiated for a particular DVD-R burner drive, and for the most part that remained x8 or lower.. for stability, and because of the speeds available to cost-constrained microprocessor equipment up to about 2006. Although the equipment might run into the $100s or $1000s, the tech was simply much slower than today's.

Enter the 8051 and CY8C5 generation of dedicated real-time microcontrollers, driven by the phone industry and other evolutionary pressures. They are much cheaper and faster than the cost-constrained microprocessors in circa-2006 DVRs. It's possible a modern MCU could emulate a device on the existing IDE bus by "learning" the signal conversation, extract the MPEG-2 stream destined for the optical media as a series of ATA packet commands transferring data, then direct it over a USB 2.0 bus to an external computer, iSCSI device, USB drive, or a modern third-party USB DVD-R burner.

The small size and near-complete SoC implementation on prototype boards from Cypress Semiconductor, for $10 in single quantities, makes it almost an exercise in software only.. with a few custom cabling requirements.. and choices over wireless or some type of re-housed external port exposed through a faceplate.

Re-implementing a nearly 30-year-old IDE bus in a real-time MCU using C code is no small task, but it doesn't seem insurmountable given that the bus has been thoroughly documented.. and the ATAPI interface is on the whole based on SCSI, with a relatively small command set of about 40 commands.
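As a sketch of what "learning" the bus conversation might involve: ATAPI commands are 12-byte SCSI-style packets, so a sniffing device mostly needs to recognize a handful of opcodes and pull the block address and transfer length out of each one. A minimal, hypothetical decoder (host-side Python rather than MCU C, just to show the packet layout; the opcode values are from the SCSI/MMC command set that ATAPI borrows):

```python
# Toy decoder for a 12-byte ATAPI command packet sniffed off the IDE bus.
# WRITE(10)/READ(10) layout: byte 0 opcode, bytes 2-5 big-endian LBA,
# bytes 7-8 big-endian transfer length in blocks.
import struct

WRITE_10 = 0x2A   # SCSI/MMC WRITE(10) opcode
READ_10  = 0x28   # SCSI/MMC READ(10) opcode

def decode_packet(pkt: bytes):
    """Return (opcode, lba, blocks) for a 12-byte ATAPI command packet."""
    if len(pkt) != 12:
        raise ValueError("ATAPI command packets are 12 bytes")
    opcode = pkt[0]
    lba, = struct.unpack(">I", pkt[2:6])     # bytes 2-5: big-endian LBA
    blocks, = struct.unpack(">H", pkt[7:9])  # bytes 7-8: transfer length
    return opcode, lba, blocks

# A WRITE(10) asking to write 16 blocks starting at LBA 0x1234:
pkt = bytes([WRITE_10, 0, 0x00, 0x00, 0x12, 0x34, 0, 0x00, 0x10, 0, 0, 0])
assert decode_packet(pkt) == (0x2A, 0x1234, 16)
```

Filtering for WRITE(10) packets and concatenating the data phases that follow them would, in principle, recover the MPEG-2 stream headed for the disc.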

The benefits would be the continued usefulness of these aging devices for their original purpose and some possible retention of our media history.


Y/C Combs, DNR and Twin Perfect

A Television video signal is a combination of Luminance (Y) and Chroma (C) information in one signal. A Composite (or combined) version of these two signals on a single set of wires is called a composite video signal; it includes no audio or sound information.

Normally the separation between the Y and C components of the video signal is distinct enough to recreate the video without error.. however, the signal degrades over long wires as the Chroma information "smears" into the Luminance information, affecting picture quality.

To preserve the Y and C components over long runs or poor quality cables, it is better to keep the two separate on two distinct signal pairs. The S-Video standard was created to do this, and includes four total wires arranged as two wire pairs: one pair carries the Y signal, one pair carries the C signal.

S-Video is a "wiring standard" and really has nothing to do with the "S" in S-VHS.

The "S" in S-VHS stood for "Super" and indicated more horizontal dot resolution, also expressed as a higher Television Vertical Line count, or "TVL".

Strictly speaking.. a Black & White picture of only Luminance information could be S-VHS and would gain zero benefit from an S-Video cable.. they are entirely two different things.

S-VHS is about horizontal (across the scanline) dot resolution

S-Video is about (preserving) accurate Color information that might otherwise be lost, and that might degrade perceived horizontal dot resolution, in the process of carrying the signals the short distance from the VHS player to the Television.

People often confuse or conflate the two by saying one may affect the other.. in the final result.. the picture.. which is true.. but for different physical reasons that only (sound) like they are related.. in reality they are unrelated.

A Comb filter is used to "extract" the Y from the C information from a "Composite" video signal.

The video signal is (stored) on the Tape in a "Composite" signal format.

All S-Video VHS players have a Comb filter.

When the Tape is played back the Composite signal extracted from the Tape can be handled in one of two ways.

The Composite signal can be placed on a single wire pair and output to the Television over a Composite connector, or the Composite signal can be [broken down using a Comb filter] into separate Y and C signals, put individually on separate wire pairs, and output over an S-Video connector.

The Composite connector will provide a better signal to a Television than an RF connection. The Television will have less signal losses and produce a better picture.

The S-Video connector will provide a similar, but even better picture because there will be less Chroma crosstalk with the Luminance signals over the length of the connector from the VHS player to the Television.

Comb filters also offer the "opportunity" to improve the signal quality with filters and amplifiers tuned and customized to work on the Luminance (Y) and Chroma (C) signals separately, since they exist at different frequencies and are vulnerable to degradation in different ways. Analog and Digital "noise reduction" can be used to "process" the video signal recovered from the tape before it is output to the Television.. this is called a [processing amplifier] task, performed by a [proc-amp] for short.
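The comb idea itself can be shown with a toy sketch. It assumes NTSC's property that the chroma subcarrier inverts phase from one scanline to the next: summing adjacent composite lines cancels chroma (leaving Y), while differencing cancels luma (leaving C). Real comb filters are analog or DSP hardware; this is only an illustration of the arithmetic:

```python
# Toy 1-line-delay comb filter: separate Y and C from two adjacent
# composite scanlines, relying on NTSC's line-to-line chroma phase flip.
def comb_separate(line_a, line_b):
    y = [(a + b) / 2 for a, b in zip(line_a, line_b)]  # chroma cancels
    c = [(a - b) / 2 for a, b in zip(line_a, line_b)]  # luma cancels
    return y, c

# Two composite lines: identical luma, chroma in opposite phase.
luma   = [50, 60, 70, 80]
chroma = [5, -5, 5, -5]
line_a = [l + ch for l, ch in zip(luma, chroma)]
line_b = [l - ch for l, ch in zip(luma, chroma)]

y, c = comb_separate(line_a, line_b)
assert y == [50.0, 60.0, 70.0, 80.0]   # recovered luminance
assert c == [5.0, -5.0, 5.0, -5.0]     # recovered chroma
```

The same separation is what feeds the two wire pairs of an S-Video output.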

Digital Noise Reduction (DNR) is both cheaper and considered more precise than analog noise reduction, which can have non-linear characteristics that are difficult to describe and teach. Non-linear noise reduction is also harder to repair or reproduce in similar or duplicate circuits.

Not all VHS players have NR or DNR circuits, as it's considered a more expensive and premium feature.

Another way to improve VHS playback is to tune the tracking and control circuits based on the type of tape inserted into the system.

Mitsubishi pioneered a technique called "Perfect Tape" or "Twin Perfect".

Originally intended for "preparing" the VHS recorder function, it "samples" any tape inserted into the VHS recorder/player which does not have its write-protect tab broken off.. in anticipation that it may be used for making a new recording. By doing this it can configure or optimize various circuits in the VHS player to make the best use of the Tape provided and make the strongest recording possible. On playback it similarly is optimized to extract the best signal possible.

The on screen display for a Mitsubishi VHS player with this feature can be used to manually engage or disengage this feature on demand regardless of the state of the write-protect tab.

Mitsubishi VHS players also have a direct drive method of fast forward or fast rewind called "Turbo Drive" which was used to reduce the time a consumer was required to wait for a Tape to be wound or rewound for return to a rental store.

VHS drum heads, why so many

A VHS player was originally a rotating cylindrical drum spinning on an axis at a canted angle to a video tape pulled through a tape path. It had a separate erase head before the drum, and a separate audio head and speed-control head after the drum.

The erase head simply "cleaned" the tape of any magnetic patterns before the drum heads laid down new video information in angled or tilted tracks across the width of the tape. Each angled track held one field, half of one interlaced Television picture.

After the video tracks were recorded to the tape, the audio head would use a "skimmed" small width along one edge of the tape to store sound information on a linear track, and the Control head used a small width along the opposite edge to store tracking information.

On tape this meant the very Top edge of the tape had audio information, and the very Bottom edge had control tracking information.

This didn't affect the picture, because those edge tracks sat outside the diagonal video area swept by the drum heads.

The audio track was mono (not stereo) in the first VHS standard; dual linear tracks and HiFi were two different standards that would come much later. And by most relatable audio standards it was quite low in frequency bandwidth.

The control track was a timing signal which when played back acted as a feedback signal to the tape path drive motors to servo-regulate the speed of the tape as it moved through the system, so that scanlines and signal arrived at the playback Television at the correct rate in order to regenerate the video signal. If the signal was too slow, drive electronics sped the tape up, if too fast, drive electronics slowed it down.
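That speed-regulation loop can be sketched as a simple proportional controller. The nominal interval here assumes one control pulse per NTSC frame (roughly 33.37 ms at 29.97 fps); the gain and units are illustrative assumptions, not values from any real deck:

```python
# Toy proportional sketch of the control-track servo: compare the measured
# interval between control pulses to the nominal one and nudge the tape
# speed. A longer-than-nominal interval means pulses arrive late, i.e. the
# tape is moving too slowly, so the correction is positive (speed up).
NOMINAL_MS = 33.367   # ~one control pulse per NTSC frame (29.97 Hz)

def capstan_correction(measured_interval_ms, gain=0.5):
    """Positive return -> speed the tape up; negative -> slow it down."""
    error = measured_interval_ms - NOMINAL_MS
    return gain * error

assert capstan_correction(34.0) > 0   # pulses late: tape too slow, speed up
assert capstan_correction(33.0) < 0   # pulses early: tape too fast, slow down
```

A real servo adds filtering and integral terms, but the feedback direction is the point: the recorded timing signal itself steers the playback speed.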

Frame rate tracking, indexing or accuracy were never part of this Control Track mechanism.. certain specific manufacturers replaced this track with their own variation to encode extra information and provide either their own version of a frame or location tracking system.. or re-implemented the Society of Motion Picture and Television Engineers (SMPTE) frame accurate time code on the control track.. but this was rare and non-VHS standard.

Basically the VHS "standard" was feature-less and made to be cheap and easy to implement across many manufacturers and vendors. Broadcast quality producer level features were reserved for much more expensive and purpose built equipment.. or, non-VHS (non-Home) video equipment.

DV (for Digital Video) would later re-think and cross pollinate ideas from Broadcast features and Home video features to create a new "incompatible" video standard that would be low cost enough to be accessible to the Home video market.. mostly by way of the "Camcorder".. but it was not VHS... even though it did borrow some of its ideas to achieve its goals.

The confusion often led consumers to assume that DV was an upgrade or improved version of VHS, when actually it offered video with a different set of goals and compromises. In some ways better, others worse and definitely not media compatible.

Originally there were two video heads on the spinning drum. The M-shaped tape path wrapping the tape around the drum allowed one head to lay down one complete field as an angled track from top to bottom along the length of its travel path. The next head would begin its traversal as the last head left the tape path and rotated out of contact around the backside of the drum.

So the minimum number of VHS player video heads was "two".

VHS had a specific tape speed, meaning the size of the video heads was fixed to optimize the size of the magnetic track for this speed. The initial speed was called "SP".

When longer length video recordings were made possible by changing the tape "speed" the size of the video heads had to be changed to make the width of the magnetic tracks smaller.. so (two) additional heads were added for "LP" (Long Play).

And then additional heads might be required for "EP" (Extended Play).

The LP tape speed dropped out of favor and the choice became SP or EP, even when SLP was advertised.. SLP was simply another name for the same one-third speed as EP.. so in the end only two sets of head sizes were normally included, developing into the (4-Head VCR as a consumer staple).

A "Flying" Erase head was also added to the drum so that an "Editing" Cut or Insert could be made closer to the actual point at which a video was stopped or "frozen" during playback before engaging recording from a second VCR. This was a "Prosumer" feature rarely used by most people.. but it made near-real-time "Linear" editing possible during playback on a VCR used for both playback and recording from other decks.

Previously the Erase head was not on the drum, and was offset far enough that the point where a frozen frame met newly recorded video might overlap, or include "magnetic" bleed-through of signal from a previous recording, or random noise on the tape with a pattern.. leading to chroma aberrations at the insert point. By moving the erase head closer to the actual recording heads this problem could be minimized.

So that added a (5th) possible head to the VCR (not counting the original Erase and Audio and Control heads that were "not" on the drum)

Finally "official" stereo was added to VHS, by [deep] recording an oppositely angled, slanted set of audio tracks at a different frequency and magnetic strength from the video tracks. This minimized crosstalk between the signals, and a bandpass filter could be used to further reduce the perceived "noise" in the video signal from the audio signal in the central portion of the tape normally used only for video.

Although this made stereo possible at near-CD quality.. it also introduced a perceived "buzzing" or possible interference when electrically switching from one audio head on the drum to the opposing one. Further bandpass filters were used to attempt to reduce the "noise".. but circuitry degradation over time meant the buzzing could increase over the years on older equipment. A technique some people used was to switch off the stereo track and fall back on the (mono only) track at the edge of the tape, recorded for backwards compatibility (unless specifically used for different content like alternative languages or narration, it was a duplicate of the stereo track).. which would not have any switching-buzz noise.

The "mono audio track" is also sometimes called the "Linear audio track".. choosing between them is a good thing, [mixing] them is usually a bad thing (if they are backwards-compatibility duplicates of the same sound track), primarily because the two are physically located at different points along the tape path.. any imperfection (which is very common) will introduce a slight difference or signal delay that manifests itself as a (tunnel) echo effect in the audio when both tracks are played in [mixed mode].. over time, on older equipment or older tapes, this effect increases.

[Mixing] the stereo and mono tracks did have a purpose however, if they contained different content.. for example, orchestral music recorded on the stereo tracks, and speech, dialogue or narration recorded on the linear mono track. In this way it was used as a simple audio "mixer" setup, and allowed for post-production with a single VCR.. often called ADR, "Automated Dialog Replacement", in the film industry.. also known as a "looping" or "loop session" recording to improve the sound quality of dialogue. Today this can be accomplished with much greater ease in computer software mixers for working with sound and video.

The legacy practice of [mixing] stereo and mono track sources that contain the same content, however, is not recommended.

So adding two more drum heads brought the total on the drum up to (4 + 2 + 1 = 7) for the video, audio and flying erase head.. and if LP is actually supported (6 + 2 + 1 = 9 heads) .

In the end 4 + 2 + 1 was more normal, advertised as 4-head plus HiFi audio plus a "Flying Erase head".. if the product was a high-end model intended for limited Insert Linear video editing between two or more VCRs.. also called "decks".


Capturing VHS to PC, before it's gone

I've been busy exploring (or re-exploring) the process of converting VHS tapes to PC files before the equipment is totally gone, or the tapes disintegrate. JVC and Funai have stopped producing VHS players and eBay is starting to run out of even used VHS machines.

I started with a simple survey of the methods and tried to pick a simple path: USB dongle to simple capture software, like VirtualDub, to a computer file. But two things occurred.

1. I didn't realize how important a good VHS player was and the initial results were bad
2. I didn't know nearly enough about VHS video signals to make reasonable decisions about equipment or software

So I turned to some online forums like VideoHelp, AVSForum and DigitalFAQ

Time and again I reached a point of decision, only to collapse when I posted a summary of my efforts and learned there was still much to learn.

So a quick knee-jerk decision to pick up the project turned into months of reading and correlating, and embarrassment online from people who seemed to know better than me, because they were retired and had been in the broadcast business for many years.

Meanwhile a clock is ticking.. not only on the tapes, as they are getting older, but on the hardware and its availability.

VHS playback is a very complex thing.

To understand a good VHS player versus a marginal or bad one, you have to start with an ideal assumption of the source tape. Then imagine all the things that could go wrong, and could be handled by the choice of VHS player, or anything you insert between the player and your capture device.

You don't normally have access to a perfect tape, or perfect capture device.. and what could go wrong can only be speculated about, or told to you by more experienced people.

So let's start with the near worst case: a broadcast over-the-air signal captured by a TV tuner, and then put on a tape.

I learned the VHS system was invented by JVC (Victor Company of Japan) and the first VHS recorder/player was the HR-3300 released in the mid 1970's.

Three major companies in Japan were working on the Home market video player: Sony, JVC and Matsushita (aka Panasonic). Sony wanted to use the "C" method of tape lacing around a helical drum and faster tape, which only recorded one hour of video. Sony offered their system to JVC and Panasonic, who turned it down in favor of "M" tape lacing and slower tape to fit two hours on a tape, thinking Home users would prefer to save "Movies" on the slightly larger and more expensive tapes. Most of that was academic and they both turned out to be right in different market segments.. but somewhat like Compaq vs IBM years later.. the lower cost to the consumer and greater "choice" or "confusion" led to VHS being the most popular.

So VHS stands for "Video Home System" and JVC marketed three versions of their recorders:

HR - Home Recorders
SR - Service Recorders
BR - Broadcast Recorders

There were others targeted for particular industries, but these were/are the most accessible to people today.. but are becoming scarce.

Video signal is a strange kludge of "encoding" and "compression" through hardware circuitry rather than digital processing. Somewhat like the typewriter, it had to slow things down.. because the equipment of the day was much slower than today.. so it had very limited bandwidth in which to transmit even a luminance signal... bright and dark spots on a screen.

A Television signal is basically two overlapping pictures called fields, transmitted one after the other. They are taken at two different times, so there is a slight "gap" in between them in which motion is missed, and a difference can arise between the two pictures if they are displayed at the same time.

A normal Television is designed to never show both pictures at the same time. The human eye's persistence of vision, while one picture is "fading" and the next one is being drawn, leads to a phenomenon where the brain "interpolates" or automatically fills in the visual gaps. Because two pictures are being shown, but not at the same time, a full frame with all of the vertical resolution is called [Interlaced].. literally "woven" in time and space from the (odd) and (even) lines of either picture.

So two fields make up a frame of video, and then the next frame is constructed by showing two more "Interlaced" fields.. doing this means the signal for a full field only needs one-half the bandwidth that would be needed if both pictures were interwoven together and transmitted at the same time.

It also means that although full frames only arrive 30 times per second, the motion appears to occur at 60 updates per second.. motion resolution is preserved, even though full frame rate is not.

This is important to know, because a computer screen, and modern LCD TVs, display in what is called "Progressive" mode.. one frame at a time, no fields.. and when that happens the difference between the two [woven] fields in one frame becomes noticeable to the human eye.. a Progressive display renders the video "de-interlaced", which looks bad most of the time.

A person will notice it [more] when there is faster action, or more "difference" between the two field pictures that get woven into the one "progressive" frame.. these look like Herringbone or "Mice teeth" or zig-zag "lines" around moving things in a video scene.

There are different ways of  "compensating" during capture or after capture and during conversion to a compressed file format to make an Interlaced video look "better" when it is displayed on Progressive display.. but these methods keep changing from year to year.. and the previous methods are generally regarded as "poor" compared to new methods rapidly.

It's now considered [bad] to even attempt to "squash" or "de-interlace" video that started out "interlaced" when it is captured. Better to store it interlaced and let the software displaying it in the future use modern methods.. or if the display device is capable of displaying interlaced video as interlaced video.. give it that opportunity. The old reasons for "de-interlacing" when capturing or shortly after have gone away.. de-interlacing "on the fly" during playback was once considered slow, CPU-intense, and of poor quality on low-powered devices like cell phones.. most now have specialized hardware for doing it, and CPU power has increased exponentially.. so it is no longer an issue.

It's also important to know that capturing "Interlaced" video as "Interlaced" depends on a certain "minimal" vertical resolution in the capture device. You can often configure a capture device to capture at 320x240 or 640x480 and so forth. The second number is how many vertically "stacked" horizontal lines to capture. For a VHS signal the vertical resolution is set by the NTSC signal standard at 525 total lines, or about 480 visible (it varies because of the vertical blanking interval above and below the scanlines, which may be hidden or not shown on a TV with a bezel). Capture at anything less than that and the resulting video file will not have enough information to recreate an "Interlaced" field effect which an "Interlaced" playback device can use. It would effectively have to "mush" it all together and treat it as Progressive, with all the attendant problems that would cause.
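The field-to-frame relationship can be sketched in a few lines. `weave` is a made-up helper name, and the four "lines" stand in for the ~480 visible scanlines; the point is that a capture must keep the full stack of lines or the two-field structure is lost:

```python
# Weaving two fields back into one interlaced frame: one field carries the
# even-numbered scanlines, the other (shot 1/60 s later) carries the odd
# ones, and they interleave to rebuild the full vertical resolution.
def weave(top_field, bottom_field):
    frame = []
    for top, bottom in zip(top_field, bottom_field):
        frame.append(top)      # even scanline, from the first field
        frame.append(bottom)   # odd scanline, from the second field
    return frame

top    = ["line0", "line2"]   # one field: every other scanline
bottom = ["line1", "line3"]   # the other field
assert weave(top, bottom) == ["line0", "line1", "line2", "line3"]
```

Capture at half the vertical resolution and only one of these two lists survives, so there is nothing left to weave: the file is effectively progressive.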

Video signal also has a very confusing standard for measuring [horizontal "dot" resolution].

Called TVL, "television vertical lines", it sounds in English much like a reference to "the vertical axis" in a typical mathematical X-Y coordinate system.. but it deliberately does not mean what it sounds like it means.

Rather, TVL "vertical lines" is the answer to a question about a video signal observed on a monitor or display device. [That is], it answers the Question: how many Vertical Lines can you see horizontally in a video image, along a horizontal distance equal to the vertical height of the image? It is [Not] a count of the total number of dots across the entire "width" of a horizontal video line... but (only) the number of vertical lines, running from the top scanline to the bottom scanline, counted across a horizontal distance defined by the vertical height of the video. The vertical height of the video is a known quantity; it's a fixed number of stacked scanlines, and it does not vary even if you can't see them all.

I think I know why they picked this seemingly "weird" definition, but it doesn't help new people, who have no prior experience, to understand it.

First, the horizontal line is not always the same length on every monitor. In the years when CRT monitors and TVs were used, a display device had a bezel.. and the "hidden" portion of each vertically stacked horizontal line was [variable]. So they "intentionally" defined the "test" or "Question" to end somewhere within the center of the line, and not necessarily starting from the Left side, like on a number line.. because the Left side could be hidden under a bezel as well. -- so much for the history lesson.

But worse, after they got this answer, they intentionally continued to refer to it as the "vertical resolution", because the test involved counting striped lines that ran "vertically".. even though they were being used to measure "horizontal dot" resolution.

In the digital world of progressive displays, things "line up" more like the X-Y number line from mathematics, a 640x480 image is 640 pixel across, and 480 pixels up or down.

In the video world of interlaced displays, things get called weird names. First, it's assumed you know the frame height is the [sum] of two fields, totaling 525 lines stacked one on top of the other vertically (about 480 visible), with some lost to the top and bottom "bezel" or "blanking interval". But then they refer to the "vertical resolution" as "lines".. which are vertical lines, counted "horizontally", giving you horizontal "dot" resolution.

The end result for a VHS video image, is there are about 480 vertically stacked horizontal lines, and about 240 horizontally aligned (like dominoes) dots on each single vertically stacked line.

To put it another way, in the digital world perspective, the resolution is about 240x480 for a VHS video signal. (For a Black & White picture)

Color images use the same bandwidth, but a subcarrier to bring [chroma] information along to decorate that same line with color information, in televisions or displays that know how to use the extra information. So if you want to capture the color information along with the b&w [luma] information, you have to take more samples from the same line.

Televisions displayed color by firing the signal at triplets of dots on each horizontal line, red-green-blue, in various arrangements: linear, circular cluster, etc... but to be effective from a distance they had to appear as [one] colored dot. To a modern digital display that larger dot appears as [one] color.

But in order to capture all the information from a line of dots that may or may not be colored, the capture device must "sample" the line [three times] as much. So even though the horizontal dot resolution is 240 [luma], to capture in color you must sample at 240 x 3 = 720 [luma+chroma], for an overall capture resolution of around 720x480 to get the entire frame.

A "poorer" resolution signal, like the limited bandwidth on a VHS tape, would be limited to less than 240 TVL (vertical tick lines crossing the horizontal axis), leading to something like 200 x 480 instead of 240 x 480 for a broadcast signal. So to capture a VHS signal, even accounting for color information (200 x 3 = 600) x 480, a 640 x 480 capture setting for a VHS signal digitizer is usually (more than) enough sample resolution to capture (all the signal that there is) coming from a VHS player.
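The arithmetic in the last few paragraphs can be condensed into a tiny sketch. The x3 chroma-sampling factor follows the reasoning above, and `capture_width` is a made-up helper, not any capture tool's API:

```python
# Back-of-the-envelope capture width: TVL counts vertical lines over a
# horizontal span equal to the picture height, and roughly three samples
# per "dot" are taken so chroma is recovered alongside luma.
def capture_width(tvl, samples_per_dot=3):
    return tvl * samples_per_dot

assert capture_width(240) == 720   # broadcast-grade: ~720x480 capture
assert capture_width(200) == 600   # VHS (~200 TVL): 640x480 is plenty
```

The same formula gives the S-VHS case discussed further below: at 400+ TVL, 400 x 3 = 1200 samples per line.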

S-VHS was a different video standard which some people could afford, if they had better tape quality, a better signal source, and a VCR that could record in S-VHS to an S-VHS tape.. not a common occurrence, though S-VHS VCRs eventually became the norm. And when forced to record broadcast as S-VHS on S-VHS tape, it did look slightly better. More often though, people didn't have S-VHS recordings even when they did buy S-VHS tape, because they never bothered to force the machine to record in S-VHS mode.

It is important to recognize though that S-VHS saved at a higher "tvl" or vertical line resolution

(remember: vertical line resolution is actually horizontal dot resolution; the number of vertically stacked horizontal lines was still 525, or about 480 visible, set by the NTSC standard).

This meant the horizontal "capture" resolution should be increased beyond 720 to capture the extra horizontal line information. Advertised as "greater than" 400 "vertical lines" (TVL): 400 x 3 = 1200, for a new capture setting of 1200x480.

(but no Broadcast signal could reach 400 TVL of resolution.. only locally generated signals from computers, or certain LaserDisc players, maybe special Cable boxes, [later BluRay players] could provide 400 TVL of resolution)

S-VHS-ET and Super Quasi Playback.. were [not] a new video standard.

Basically they let you [declare] a tape as S-VHS capable even if it was really intended only for VHS recordings. In that way you could buy cheaper tapes, with the understanding that the quality of the recording might vary with the brand and quality of the tape actually used. Though many recorders picked up the label as a feature, it wasn't used that often. Quasi Playback was a feature on later VHS (only) recorders that allowed playing back recordings made in S-VHS format on a plain old VHS machine, in a slightly "fuzzy" image mode.. simply to get backwards playability under marginal circumstances.

HQ was a somewhat less successful attempt to encourage enhanced noise reduction to improve picture quality, with no real change to the VHS and S-VHS signal format standards.

In the audio space, VHS started with a single mono audio track stored along the edge of one side of the tape, like a cassette recorder. Rarely, this was upgraded to a low-quality stereo dual track that split the same mono track space into two tracks with half their normal resolution.. it was very uncommon. Later, (HiFi) ability introduced additional video drum heads to lay down a special "deep" track for stereo [underneath] the video tracks. This was popular for a few reasons, not least the near-CD-quality audio, and the freeing up of the original mono track so it could be used for "dubbing" alternative audio, like another language, or for custom time codes, or for wiping the audio and replacing it with a new audio track without editing the video.

HiFi had a slight problem however, in that "switching" noise, due to the "switch" from one audio head on the video drum (as it swung around like a merry-go-round) to the other, could sometimes be heard as a low "buzz".. Dolby equalization was used to bandpass-limit or minimize the problem.

For various reasons, people would sometimes choose to "change" the Audio source selected when playing a tape back, and this became the [audio monitor] switch on many VCRs: Norm generally played HiFi, Linear/Mono played the original "linear audio" track on the edge of the Tape, Mix played both HiFi and Linear (and gave a tunnel effect), and Left or Right selected one or the other stereo channel. For backwards compatibility a Linear mono track was normally recorded at the same time a stereo track was being embedded "below" the video tracks in the center of the tape.

So while capturing video, the audio bandwidth may be as low as 8 kHz or as high as 21 kHz, and the sound card used with some video capture gear should sample at anywhere from 16 kHz up to 44 kHz to make sure all of the sound's dynamic range is captured. Even though less will often be more than sufficient.
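That range follows from the Nyquist criterion: the sample rate must be at least twice the highest audio frequency you want to keep. A quick sketch, plugging in the bandwidth figures quoted above (nothing here is specific to any particular capture card):

```python
# Nyquist rule of thumb: sampling at a rate >= 2 * bandwidth captures the band.
# Bandwidth figures are the VHS audio numbers mentioned in the text
# (linear track roughly 8 kHz, HiFi up to roughly 21 kHz).

def min_sample_rate(bandwidth_hz):
    """Minimum sample rate (Hz) needed to capture the given audio bandwidth."""
    return 2 * bandwidth_hz

for name, bw in [("linear track", 8000), ("HiFi", 21000)]:
    print(f"{name}: {bw} Hz bandwidth -> sample at >= {min_sample_rate(bw)} Hz")
# linear track: 8000 Hz bandwidth -> sample at >= 16000 Hz
# HiFi: 21000 Hz bandwidth -> sample at >= 42000 Hz
```

Which is why 16 kHz is plenty for linear-track audio but HiFi wants the full 44 kHz (44.1 kHz in practice) sound card rate.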

Obviously for the linear track, the slower the tape moves, the less bandwidth is available for sound, and thus its dynamic range will fall quite a bit.

Beginning with the HR-7000 series, and with different models in the SR and BR lines, JVC introduced various features to improve signal conditioning before recording and during playback. Some reduced noise and boosted signal to noise ratio, others stabilized the video signal to better conform with NTSC standards, and still others sought to improve the separation between luma and chroma information before recombining them to put them on tape, or when extracting them to send to optional S-Video jacks.

S-VHS is not S-Video

S-VHS was a format declaration regarding how a video signal was processed and stored on video tape.

S-Video was an electrical and connector standard regarding the separation of luma from chroma information.

A video signal normally has luma and chroma mixed together, which tends to blend or "smear" when transmitted over long distances or across poor quality cables. By keeping them separate as much as possible, this smearing effect does not happen and video quality remains higher.

Not all VHS players have S-Video connectors, but they were relatively common in later years.

Selecting a VHS player or VCR recorder is made more difficult once all of the options in later equipment begin to be understood. It's easy to focus on the worst case scenario and become paralyzed with fear of making an uninformed choice.

But new gear is no longer being made.

The gear made last, and thus newest today, at the end of the production line is not necessarily the most appropriate; it's been reported that cost saving measures on the final machines rendered them worse than slightly older machines. Again, a contradictory if not unhelpful conclusion.

Add to this that the brands and lines from competing companies (Panasonic, JVC, and eventually Sony) varied considerably in quality and reliability.. and performed variably with tapes originally recorded on a competitor's equipment.

It's generally presumed that if you have fewer than 100 tapes, a service bureau specializing in VHS to DVD or PC transfer is the lowest cost option.. but these businesses are disappearing and the quality of their service is also variable.. word of mouth is the best way to find a good one.. or trials with test tapes on a personal basis.

Learning all about the lines of one of the big three previous makers is time consuming but probably worth the time. Recommendations for Panasonic and JVC are easiest to come by.. generally the ProLine for Panasonic or the Service (SR) line for JVC are good (if expensive) places to start.. and then choosing only models that have a built-in line TBC and some sort of video noise reduction system.


Tracking

This is the feature of all VHS players that reads the bottom edge of a VHS tape and extracts a pulse indicating the relative speed of the tape past the playback heads. If it is too slow the machine will attempt to speed the tape motion up; too fast, it will attempt to slow it down. It is this real-time "feedback" loop which is the first line of stabilizing a playback picture. On top of that, the line TBC will attempt to regenerate the NTSC standard video signal and correct any errors it detects. Other features will attempt to improve color purity or reduce spurious noise in the signal. A tape can however stretch over time, distort, or lose its tracking information entirely, in which case "good" playback on one brand of machine may be unsuccessful but may succeed on a different brand of machine.

For this reason transfer professionals tend to own or have access to multiple brands and models, and can try different machines to capture a clean transfer. This drives up cost and makes using a service bureau all the more attractive.

Acquiring a VCR is only the beginning

VCRs are all old; even the last Funai or Sanyo came off the line in 2016. The machines have rubber belts, rubber tires and rubber pinch rollers which are designed to last about 1000 to 2000 hours, or about 6 weeks of continuous use, before needing cleaning or replacement. Those parts also age just sitting on a shelf; ozone and other air impurities, or debris shed by tapes during playback, can attack the rubber and accelerate part aging. For large quantities of tape, it's likely that if not as soon as it's purchased then soon thereafter a machine will need professional service from someone who knows how to disassemble and inspect the brand and model you acquire.. and those people are retiring or have moved on from their last jobs.. they are becoming increasingly scarce, and expensive in dollars and time to find. Almost before buying a VCR you need to figure out who will service it and where. And when shipping a VCR cross country for repair, it can easily become damaged, stolen or lost.

But once you have a working VCR and it's producing a clean signal..

There is still the issue of signal stability and Macrovision copy protection distortion.

Copy protection was enforced by deliberately damaging part of the video signal. Capture devices detect this damage and can judge even false video signal damage as legitimate copy protection enforcement.. and refuse to capture the signal.. or at the very least, even if captured, the damaged signal may appear distorted or damaged.. usually as flickering from bright to low brightness.

The line TBC and noise reduction circuitry in a good VCR can clean up some problems, but Macrovision copy protection operated at the frame level.. a level of correction all but the most expensive and rare VCRs did not perform. So often an external (or "in line") full frame TBC is needed to correct frame-level video signal errors.. and although not intended, this will also correct copy protection damage to the signal as well.

TBC - Time Base Correctors

TBCs come in several cost ranges and some brands are known to be better than others; universally however they tend to perform poorly if run for long periods or allowed to overheat. Generally the lowest cost ones suitable for real use start at about $400 minimum but often cost much more.. if they can be found. They are also no longer being made.. like the VCR, the need for NTSC video time base correctors is going away. Over the air digital video signals no longer have to conform to the old NTSC standards, and although there are exceptions, the old video signal standards no longer apply, so the equipment is no longer needed. Another reason a service bureau can be more attractive.

Video Processors

A video signal can lose luminance or chroma information, or it can become skewed or desaturated. An external video signal processor can be used to artificially restore the base or "floor" for some signal problems, or reduce the "ceiling" in some cases. This can often be done post-capture as well, at greater CPU and time expense, but as in photography, once detail is "blown out" it can never be recovered.. so sometimes having an external video proc to put "in line" can be beneficial.

Video Capture

Video capture device choices usually depend on budget, but also on available equipment, intended post processing (if any) and final destination. In the past people usually chose to capture to MPEG2 to save space, and because a great deal of effort went into the DVD standard, people are familiar with the output quality.. it's a familiar "known". People worried however about "editability" and advanced editing "later". Simple commercial cuts and joins are relatively easy and, with experience, can be accomplished without re-encoding and suffering a "generation loss" that would reduce the final output even if it is MPEG2. Greater compression to H.264 or DivX is becoming more familiar and "possible", but there are great tradeoffs long term to storing archival footage in such a compressed form.

On the other hand, precious footage kept at full capture resolution and left uncompressed (a wedding video for example), while larger on a data DVD or hard drive, is better kept stored that way, with compressed DVD "print" copies made for other people as needed.

A simple "raw" AVI capture at full resolution can generally be handled by any video capture card or USB capture dongle.. however the capture software used usually needs to be compatible with Windows and DirectX so that a range of PC equipment and operating system versions can be selected. Video capture devices tend to be sensitive to overheating, and to "dropped" frames. A "raw" AVI capture records video separately from the audio and combines them into the final file.. the capture software driving the capture hardware has to "compensate" for any dropped frames and note them so they can be accounted for.. usually it's best to minimize the possibility of dropped frames.. or the video and audio will drift out of sync. Compensation can be anything from [a] allowing them to drift and letting you fix it later, [b] chopping up sections of audio to keep them in sync, or [c] duplicating frames in the file to make up the difference.. all of which have consequences.
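Compensation strategy [c] amounts to simple bookkeeping. A minimal sketch of the idea.. an illustration only, not any particular capture package's implementation, and the neat integer frame numbers are hypothetical (real software works from hardware timestamps):

```python
# Pad a captured frame list by duplicating the previous frame wherever
# the capture hardware dropped one, so audio and video stay in step.

def pad_dropped_frames(frames):
    """frames: list of (frame_number, data) in order. Returns a gap-free
    list where each missing frame number holds a copy of the prior frame."""
    padded = [frames[0]]
    for num, data in frames[1:]:
        last_num = padded[-1][0]
        # duplicate the previous frame's data for every skipped number
        for missing in range(last_num + 1, num):
            padded.append((missing, padded[-1][1]))
        padded.append((num, data))
    return padded

captured = [(0, "f0"), (1, "f1"), (4, "f4")]   # frames 2 and 3 were dropped
print(pad_dropped_frames(captured))
# [(0, 'f0'), (1, 'f1'), (2, 'f1'), (3, 'f1'), (4, 'f4')]
```

Frames 2 and 3 come back as copies of frame 1.. the picture stutters briefly, but the audio never drifts.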

A more complex "MPEG2" capture will capture at full resolution inside the video capture hardware and then compress down into streamed video data bits intermingled with audio data bits. The combined data stream will "lock" the video and audio together, so the compensation is automatically and immediately taken into account. Since the hardware is dedicated to capture and compression, dropped frames do not lead to drifting.. they are "locked".. a capture device may have glitches, but never goes out of sync.

Macs and DV - Digital Video

Apple Macs were and are known for video editing, but capture was never their long term goal. Some capture hardware is available, but the cost and selection are much more out of the customer's control. DV - digital video - was a popular camcorder and studio format (actually two formats with vast tradeoffs).. which left "video capture" entirely up to the video "filming" device.

That is, the camcorder had a "static" or fixed choice of video capture luma and chroma resolution, and one digital file storage format on video tape, which stored a digital "file" instead of an audio and video signal. DV "capture" is a misnomer; by the time the DV tape is written in the camera, it is already a digital file.

Instead, DV "transfer" over FireWire is merely the copying of a digital file between two computers. And "printing" or dubbing is merely the copying of a file from an NLE - Non Linear Editor - back to the digital storage tape in the camera.

In this way DV tapes are merely "data tapes", and normally "video capture" never happens at a desk; it's done in real time, in the field.

DV pass-thru can use the inputs and outputs on a camcorder to "pass-thru" a video signal through the camcorder, in order to use its video signal capture circuitry to create a digital DV file and send it over FireWire to another DV camcorder or an NLE computer.

But it's very important to realize that the hard decisions.. whether to compress, and which hardware or software to use.. are already made. The encoder was chosen by the DV standard, and re-encoding or "transcoding" in PC or Mac NLE - non linear editor - software will represent a "generation loss". DV is finalized, "cooked" video; it can't be changed after the fact without losing data. But for simple cuts and joins, re-encoding or transcoding isn't needed. If the limited dynamic range of the scenes "filmed" is acceptable (as for news reporting) it's okay.. but not for things like "movies".

Consumer DV also used a compression choice that saved less detail, to make equipment cheaper for people in the late 1990s when hardware was much slower.. more detail had to be sacrificed then than would need to be today.

Broadcast DV (or one of its variants) used tapes that stored fewer minutes but also used a less aggressive encoder to retain more image quality.. it was only available to users of "DV broadcast" equipment, and even today is not that great.. there are better choices now.

In fact people did commonly "dub" (duplicate by dubbing) VHS to DV in combo recorders for a time, later replaced by VHS to DVD recorders.. finally leading to off air recording on DVRs like TiVos and personal PVRs like Windows Media Center on the PC or EyeTV on the Mac, using dedicated MPEG2/4 capture tuner/encoders like the SiliconDust HDHomeRun devices.


Linux and Video Capture

The Linux operating system evolved from the early 90's, as little more than a boot loader, into a full fledged graphical desktop by the year 2000. Its users and developers sought to "dub" or duplicate much of the evolving functionality of the most popular operating system(s) of the day.. mostly Microsoft Windows.. by reverse engineering the software and hardware created for sale to users of those systems. Thus while inexpensive, and a great training ground for future programmers and for daily users with well defined needs.. it was mostly a "build it yourself" and "self supported" operating system. The makers of video graphics production and capture gear were usually financially motivated and protected their investments by patenting their software and hardware methods to prevent competition. Discovery through reverse engineering was the only way to construct a framework; occasionally a company going out of business would "gift" its technical knowledge, but this was rare.

In that light, Linux haltingly started and stopped many video graphics production and capture projects, the ultimate winner being the mostly kernel based driver interface called "Video for Linux", abbreviated v4l and later v4l2 (version two). Plug-in drivers could be created to "expose" features of capture devices, which software could then expect when searching for capture devices at startup. Several semi-commercial and freeware NLE systems emerged to support v4l2, but always lagged. At this point in history capture is pretty well supported for a small selection of video capture hardware, but since video capture hardware is no longer being developed.. that is, no new capture systems are being created as the signal standard declines.. the choices will probably remain the same or shrink; they won't increase. Having more choice among video capture hardware is probably better, since some devices will have bugs that are only discovered later.
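On a Linux box, the capture devices a v4l2 driver has "exposed" show up as /dev/videoN nodes, which is what capture software enumerates when it starts up. A minimal sketch of that discovery step:

```python
# List the device nodes Video4Linux (v4l2) drivers register; capture
# software typically probes these (plus ioctls for capabilities) at startup.
import glob

def v4l2_device_nodes():
    """Return the /dev/video* nodes present on this machine (may be empty)."""
    return sorted(glob.glob("/dev/video*"))

print(v4l2_device_nodes())
```

On a machine with no capture hardware (or no loaded driver) the list is simply empty.. which is exactly the "no devices found" case capture applications report.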

So while "possible", Linux is probably not the most robust, frustration free video capture platform to choose.. unless you were brought to Linux by the choice of the video capture hardware "first" and told Linux is the best operating system for that hardware. Choosing the operating system before the hardware is rarely the best approach for video capture.

What replaced DV tapes

Like VHS, the time for DV tape is also passing. Camcorders and "filming" equipment started slowly but have now transitioned to mostly "Progressive" recording of digital video, using a myriad of compressed or uncompressed encoding methods writing direct to data files.. first on small portable hard drives, then custom solid state memory cards, and finally plain old SSDs.

Since the "Progressive" method has much more in common with actual celluloid "film" than the older "Interlaced" video signal camcorders.. studios and broadcasters eagerly adopted the transition, and in some cases no longer even use actual film, as the budget proposition inevitably trades places as to which is more expensive.

While JVC and Panasonic and many others adopted the MiniDV cassette, Sony invented the Hi8 format, and there were a few other formats like VHS-C.. but the majority were DV centric, and like VHS that is what is mostly being transferred today. For the most part DV tapes have become scarce and hard to find except as New Old Stock (N.O.S.).

DVD to BluRay

PVRs and streaming have mostly reduced the market and demand for personal storage; cell phone video is generally much smaller and uploaded to the cloud for long term storage. Or personal videos are kept on local hard drives and synchronized with long term backups. The storage format changes but is generally unimportant to the end user, since transcoding on the fly in the background while displaying a file has become commonplace.

DVD as a long term format is being called into question, even using the M-Disc format; burning a DVD-R takes time, and the possibility of scratching or damaging a disc undermines reliability. Also many DVD burner makers have left the market.. Samsung, Sony, even HLDS and LiteOn seem less likely to remain much longer. Blu-ray and double density Blu-ray as a movie storage format is strongly resisted, and Blu-ray PVRs are highly restricted in the US from burning copies, or must comply with burn-once rules.. meaning there are few Blu-ray PVRs and they must compete with hard drive or streaming storage in the cloud.

And all optical burners and players tend to have rubber belts which wear out over time, meaning optical drives, unless recently manufactured, will eventually become inoperable.. as demand goes down they are likely to increase in price.. so as a long term storage format, there are some serious questions to think about.


Installing FCP7 on a 2007 Mac Mini

Final Cut Pro 7 (part of Studio 3) can be installed on a 2007 Macmini2,1 Intel Core 2 Duo running Snow Leopard 10.6.8

Tip!  A very important thing to know

If you run xrdp (or VNC) to remotely access your Mac, be (very) aware that the VRAM reported by the video card will be incorrect if you do not have a "real" monitor plugged into the Mac.. or presumably an EDID "emulator" to convince OSX it has a real monitor attached. Errors that prevent application startup will occur, even if the directions below are followed, when no monitor is plugged in. minsys.plist adjusts the blocking test during install, but the app also checks the available VRAM when it starts.. if no monitor is plugged in, it will report zero (0) available and the app will stop and quit. Plug in a monitor and the apps will go ahead and start.. you can even view them over an xrdp or VNC remote connection.. as long as a monitor or emulator is plugged into the graphics port.

Normally the installer runs an app called (Requirements Checker.app) which requires at least 128MB of VRAM on the graphics card to install.

However this check is controlled by a plist file called minsys.plist

The installer can be copied from the install DVD to a folder on the Mac Mini; then Finder can be used to navigate to the [Install Final Cut Studio] alias. Right-click the alias and choose

[Show Original]

Then right click on [FinalCutStudio.mpkg] and choose [Show Package Contents]

Then right click on [Requirements Checker.app]
 - this is a package with an unusual text document icon - and choose

[Show Package Contents]

Near the bottom of the list is [minsys.plist]

Right click on [minsys.plist] and choose [Open with...] [Other..] choose [TextEdit.app] Open

Search near the bottom for "AELMinimumRAM"

Search for <string>128</string>

Change 128 to 64

TextEdit > File >  Save
TextEdit > File > Close
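The TextEdit steps above can also be scripted with Python's standard plistlib. A hedged sketch only.. the path is illustrative (point it at the minsys.plist you located via [Show Package Contents] and keep a backup of the original first), and the same edit applies to the [AELMinimumVRAM] key later:

```python
# Lower a minimum-requirement value inside a copy of minsys.plist.
import plistlib

def lower_requirement(path, key, new_value):
    """Rewrite one key in a plist file, e.g. AELMinimumRAM "128" -> "64"."""
    with open(path, "rb") as f:
        data = plistlib.load(f)
    data[key] = new_value          # the values are stored as <string> entries
    with open(path, "wb") as f:
        plistlib.dump(data, f)

# lower_requirement("minsys.plist", "AELMinimumRAM", "64")
```

Run it against a copy, never the install DVD original.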

Return to the top of the Folder containing the copied DVD Installer software and double click  [Install Final Cut Studio]

The install should now proceed as normal, the 2007 Mac Mini2,1 Intel Core 2 Duo will pass the requirements checker app.

[After install.. which may take several hours]

Upon first startup, it will inform you that the VRAM of the system is not 128 and quit.

Go to the Applications directory and  find the [Final Cut Pro.app] package and right click then choose [Show Package Contents]

Then open [Resources] and search for [minsys.plist]

Perform the same TextEdit.app procedure to modify the [AELMinimumVRAM] key


And change it to 64


Save and close the file

Final Cut Pro should now start and query for the DV deck type you regularly use. Accepting it without one connected will produce an error, but it offers the choice to continue and completes the setup of the program. The NLE will then open.

You may need to do the same procedure for other Final Cut Studio apps, though not all, on an individual app basis: open the minsys.plist, set the value to 64, save and close. Then the app should open. Performance however cannot be expected to be up to the standards of supported video hardware.

[After the Final Cut Pro editor starts]

You can open [Final Cut Pro (menu)] > [Audio/Video Settings...], search for the [Video Playback:] selection, and change it from the default to [None] to prevent a re-detection failure for the DV deck type on each startup of the FCP app.

The requirements are set to optimize the user experience; not all functions and add-ons may operate as expected, and this is not a supported method of install.

Additional DVD media and a legitimate Installation Serial Number will be required.

None of this will circumvent the need for a legitimate license for the product. I believe official product support for this product has now ended, but in any event, performing this procedure to install and use the product on unsupported hardware will not be supported by the manufacturer.


vdrvroot.sys fails to boot 0xc000000f

Came across a Windows 7 x64 laptop that would fail to boot; the error message was rather obscure and didn't help much.

I had a Corsair USB SSD drive with a copy of Macrium and its Microsoft DART "like" WinPE on it.

Used that to backup the hard disk contents

Then used the [Fix my Computer] option the backup program provides.

I didn't expect much.

It offered to rebuild the BCD and helpfully (prompted) for [which] volume to boot from.

Very much unlike using the BCDedit program.

The default, for an odd reason, was pointed at the Recovery partition.

I unchecked that volume and checked the C:\ (or systemroot) volume.

Then let it continue.

I walked away and came back to a fully booted machine waiting at the password screen to start up the desktop.

A bit of (shock and awe) that it was that simple.

The error message:

vdrvroot.sys fails to boot 0xc000000f

apparently is the Virtual Drive Root Enumerator driver, and the obscure stop code 0xc000000f ("a required device is inaccessible") would appear to be pointing out that the bootable volume pointed to by the BCD is wrong.


HP SMH data source is missing, blank

For a ProLiant DL380p Gen8: if you use the Hewlett Packard SPP method for installing agents, and the System Management Homepage comes up with nothing for the components on the homepage, the data source is blank, missing or not set, and on the Settings page you cannot find a list of sources..

Basically you are missing:

# yum install hp-ams hp-smh-templates

One restores the Settings option for selecting a source; the other completes the sources.

After yum installing them from the SPP repository, you (do) need to restart the hp-snmp-agents init.d script to provide the data.

And don't forget to set the snmpd.conf readonly community strings to something the hpsmh can access.
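For reference, a read-only community definition in /etc/snmp/snmpd.conf looks something like this.. the community name and source address here are illustrative examples, not required values:

```
# illustrative snmpd.conf entry: read-only community, limited to localhost
rocommunity hpmonitor 127.0.0.1
```

Whatever name you pick is what hpsmh (and the snmpwalk check below) must be given as the community string.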

Key thing to know

When installing the agents from the SPP from now on: they broke the packages up into still more packages, and if they aren't all installed, SMH will have no data source and will be blank.

From this:
yum --disablerepo="*" --enablerepo="spp" install hp-snmp-agents hpssa hponcfg

To this:
yum --disablerepo="*" --enablerepo="spp" install hp-snmp-agents hpssa hponcfg hp-ams hp-health hp-smh-templates hpssacli net-snmp net-snmp-utils
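A quick way to spot which of those packages a host is missing.. just a sketch of the set arithmetic, using the package names from the lists above; on a real host you would feed it the installed names from `rpm -qa --qf '%{NAME}\n'`:

```python
# Compare the packages SMH needs against what is actually installed.
REQUIRED = ["hp-snmp-agents", "hpssa", "hponcfg", "hp-ams", "hp-health",
            "hp-smh-templates", "hpssacli", "net-snmp", "net-snmp-utils"]

def missing_packages(installed):
    """Return the required packages absent from the installed-name list."""
    return sorted(set(REQUIRED) - set(installed))

# e.g. a host installed with only the old, shorter package list:
print(missing_packages(["hp-snmp-agents", "hpssa", "hponcfg"]))
# ['hp-ams', 'hp-health', 'hp-smh-templates', 'hpssacli', 'net-snmp',
#  'net-snmp-utils']
```

An empty result means the full set from the second yum line is present.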

The following are also good cli checks

[root@host ~]# snmpwalk -c public -v 2c  localhost

SNMPv2-SMI::enterprises. = STRING: "ProLiant DL380p Gen8"

[root@host ~]# hpssacli ctrl all show status

Smart Array P420i in Slot 0 (Embedded)
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK

[root@host ~]# hpssacli ctrl all show

Smart Array P420i in Slot 0 (Embedded)    (sn: 009999999999999)

[root@texasvmhost ~]# hpssacli ctrl slot=0 pd all show

Smart Array P420i in Slot 0 (Embedded)

   array A

      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS, 0 MB, Failed)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS, 1200.2 GB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS, 1200.2 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS, 1200.2 GB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS, 1200.2 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS, 1200.2 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS, 1200.2 GB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS, 1200.2 GB, OK, active spare for 1I:2:1)