I was playing with ChatGPT and discovered what may be some little-known facts about Windows 7.
Capturing to native DV format produces a high quality file with only mild intraframe compression and slightly reduced color sampling, 4:1:1.
However, since an NTSC signal is recorded to tape with already degraded color by way of color-under recording, which assigns far less bandwidth to the color components (UV) of the signal than to luma (Y), converting that lesser color signal to 4:1:1 digital sampling loses essentially nothing. There simply isn't any more signal bandwidth on the tape to carry more information than that collected by 4:1:1 digital sampling.
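To put rough numbers on that claim, here is a small back-of-the-envelope sketch in Python. The VHS color-under chroma bandwidth (~0.4 MHz) and NTSC active line time (~52.7 µs) are approximate figures I'm assuming for illustration, not measurements.

```python
# Rough comparison of VHS color-under chroma detail vs. DV 4:1:1 sampling.
# Figures are approximate/assumed: VHS chroma bandwidth ~0.4 MHz,
# NTSC active line time ~52.7 microseconds, 720 luma samples per line.

vhs_chroma_bandwidth_hz = 0.4e6      # color-under chroma bandwidth (approx.)
active_line_time_s      = 52.7e-6    # NTSC active picture portion of a line
luma_samples_per_line   = 720        # ITU-R BT.601 / DV active samples

# Nyquist: a 0.4 MHz signal over 52.7 us resolves ~0.4e6 * 52.7e-6 cycles,
# i.e. roughly twice that many distinct samples across the line.
vhs_effective_chroma_samples = 2 * vhs_chroma_bandwidth_hz * active_line_time_s

# DV 4:1:1 keeps one chroma sample for every four luma samples per line.
dv_411_chroma_samples = luma_samples_per_line / 4

print(f"VHS color-under: ~{vhs_effective_chroma_samples:.0f} effective chroma samples/line")
print(f"DV 4:1:1:         {dv_411_chroma_samples:.0f} chroma samples/line")
# 4:1:1 (180/line) comfortably exceeds what the tape can deliver (~40/line),
# so the 4:1:1 step itself discards nothing the VHS signal actually carried.
```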
DV video does run the signal through a DCT (Discrete Cosine Transform) and lightly quantizes the result, a mild intraframe-only compression step, but it does not reduce the number of digital samples in the Y or UV planes and it does no compression across frames. This results in a large file, but one that retains as much of the original signal as is practical for playback.
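For a sense of scale, a quick sketch of the storage cost using DV's nominal rates (25 Mbit/s for video, roughly 28.8 Mbit/s for the whole stream once audio and subcode data are included):

```python
# Storage cost of DV capture (nominal figures: 25 Mbit/s video,
# ~28.8 Mbit/s total stream with audio and subcode data).

dv_total_rate_bits = 28.8e6                 # bits per second, whole DV stream
bytes_per_second   = dv_total_rate_bits / 8 # ~3.6 MB/s

one_hour_bytes = bytes_per_second * 3600
print(f"~{bytes_per_second/1e6:.1f} MB/s, ~{one_hour_bytes/1e9:.1f} GB per hour of tape")
# Roughly 13 GB per hour: large by delivery-format standards, but every frame
# is independently (intraframe) coded with no temporal compression.
```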
DV does not in any way attempt to upsample or 'invent' more information than is actually present in the signal, nor does it attempt to convert from field-based to frame-based video (otherwise known as interlaced-to-progressive conversion).
Other compression schemes like MPEG-2 (h.262) or h.264 treat color samples somewhat differently, since they were designed with progressive, full-bandwidth sources in mind: film transfers and full-frame digital cameras that could supply 4:4:4 color. MPEG-2 or h.264 could take advantage of that higher available color sampling and downsample it to 4:2:0, retaining somewhat more color detail than a broadcast or VHS tape color-under signal, which never had anywhere near the same original bandwidth because of the NTSC transmission standard.
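A little Python sketch of what those sampling labels actually mean in sample counts for an SD frame; the takeaway is that 4:1:1 and 4:2:0 carry the same number of chroma samples, just distributed differently:

```python
# Chroma sample budgets per SD frame (720x480) under different subsampling.
# J:a:b notation: 'a' = chroma sample sites in the first row of a 4-pixel-wide
# block, 'b' = chroma sample sites in the second row of that block.

WIDTH, HEIGHT = 720, 480

def chroma_samples(a, b):
    """Chroma samples per plane (Cb or Cr) for a J:a:b scheme on 4x2 blocks."""
    per_block = a + b                      # samples per 4x2 pixel block
    blocks = (WIDTH // 4) * (HEIGHT // 2)  # number of 4x2 blocks in the frame
    return per_block * blocks

for name, (a, b) in {"4:4:4": (4, 4), "4:2:2": (2, 2),
                     "4:1:1": (1, 1), "4:2:0": (2, 0)}.items():
    print(f"{name}: {chroma_samples(a, b):,} samples per chroma plane")

# 4:1:1 and 4:2:0 keep the same number of chroma samples (86,400 per plane),
# but distribute them differently: 4:1:1 keeps full vertical chroma resolution
# (which suits interlaced NTSC sources), 4:2:0 halves it in both directions.
```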
When converting VHS tape to digital files became popular, using the latest technology was considered more attractive, and owning one capture device rather than several limited the popularity of a basic DV converter.
In the Windows 98, 2000, and XP era, capture on Windows went from mostly DV tools like Windows Movie Maker to Adobe Premiere Elements and Pro, Sony Vegas, Canopus DV, or Grass Valley EDIUS.
Windows 7 followed up with the separately installed Windows Live Essentials tools, which could capture in DV format and then quickly convert the DV file into WMV (VC-1, basically Microsoft's answer to codecs like MPEG-2/h.262 and h.264 AVC) or MP4 (h.264 AVC) formats.
The key was not in the Save Project menus but in the [Save Movie] menu and choosing the delivery type and format. Saving for DVD burning would render the project to WMV (VC-1) and automatically hand it off to Windows DVD Maker, which guided you through setting up a menu structure for the disc and then burning it (the disc itself ends up in DVD-compliant MPEG-2). Choosing a delivery type of [For Computer] let you give the saved file a specific name and destination and then pick a file type: [MPEG-4/H.264 Video File] or [Windows Media Video File - WMV/VC-1].
While these tools could be used with a DV bridge, they were targeted more at DV camcorders, for offloading or uploading camcorder videos and then converting them to DVD or consumer file formats.
They soon moved on to working with MPEG-2 hardware capture devices, since those were more popular for capturing broadcast or cable video, where commercials were routinely edited out and the resulting files were stored on low-capacity devices like hard drives or limited DVD media until consumed.
The MPEG-2 hardware capture devices produced streams of lower absolute bandwidth than DV over FireWire and could finally be handled over PCI 2.x and USB bus types, eliminating the need for royalty-encumbered FireWire hardware. In theory they could also capture a slightly better color signal if the transmission medium was entirely digital, as with ATSC instead of NTSC.
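A rough data-rate comparison, with the caveat that the MPEG-2 capture bitrates below are typical settings I'm assuming rather than figures from any specific device:

```python
# Rough data-rate comparison. DV and bus figures are the nominal published
# numbers; the MPEG-2 capture bitrates are typical assumed settings.

capture_rates_mbit = {
    "DV stream (FireWire capture)":   28.8,  # nominal total DV stream
    "MPEG-2 hardware capture (std)":   6.0,  # typical DVD-like setting (assumed)
    "MPEG-2 hardware capture (high)":  9.0,  # near the DVD bitrate ceiling
}
bus_rates_mbit = {
    "USB 2.0 (theoretical)":      480,
    "FireWire 400 (theoretical)": 400,
}

for name, mbit in capture_rates_mbit.items():
    gb_per_hour = mbit / 8 * 3600 / 1000
    print(f"{name:32s} {mbit:5.1f} Mbit/s  ~{gb_per_hour:5.1f} GB/hour")
for name, mbit in bus_rates_mbit.items():
    print(f"{name:32s} {mbit:5d} Mbit/s")
# An MPEG-2 hardware encoder hands the PC a stream several times smaller than
# DV, well within USB 2.0's practical throughput, at the cost of lossy
# interframe compression happening before the bits ever reach the computer.
```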
Some of the details got lost in translation, and a semi-cult following for capturing video in uncompressed 4:4:4 formats grew over the intervening years as non-encoding field and frame grabbers became cheap and widely available. These did not add anything to the quality of the capture and in fact complicated many problematic captures, because the earlier hardware encoders had time base correction and frame synchronizing abilities that simple frame grabbers did not.
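The arithmetic shows why those uncompressed 4:4:4 captures ballooned without buying anything for a VHS source (8-bit samples assumed):

```python
# Why "uncompressed 4:4:4" capture of an SD source balloons in size.
# 8-bit samples assumed; 720x480 at 29.97 frames/s.

width, height, fps, bits = 720, 480, 29.97, 8

samples_per_frame = width * height * 3          # Y + Cb + Cr at full resolution
mb_per_second     = samples_per_frame * bits / 8 * fps / 1e6

print(f"Uncompressed 4:4:4 SD: ~{mb_per_second:.0f} MB/s, "
      f"~{mb_per_second*3600/1000:.0f} GB/hour")
print("DV by comparison:      ~3.6 MB/s, ~13 GB/hour")
# ~31 MB/s (~112 GB/hour), most of it empty chroma bandwidth when the source
# is a color-under VHS tape that never carried 4:4:4 detail to begin with.
```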
4:4:4 was, and remains, the capture domain of film and on-site studio work rather than something easily mastered at home.
h.264 and later compression techniques were initially developed to enable mobile phone and internet video delivery, but later profiles matched and then exceeded the delivery quality of MPEG-2 and have all but replaced it.
The age-old question of DV versus MPEG-2 or h.264 capture options sidesteps, or 'hides', the question of archiving in an interlaced (field-based) format versus a progressive (frame-based) format. DV from the outset was field-based and is built around SD (Standard Definition) 720x480 video.
Later variants of DV such as HDV used a form of MPEG-2 compression while avoiding the interfield/interframe temporal distortion, and the unavoidable artifacts, that come from converting field-based video to frame-based video. HDV, however, was never very popular and was short-lived.
The arrival and popularity of TiVo and DVD recorders, and then of EyeTV and Windows Media Center, eventually led to the broad assumption that MPEG-2 and DVD formats were superior to DV or HDV for all purposes. After 2009 and the conversion from NTSC to ATSC, most TV transmissions became progressive, with ever larger frame sizes well beyond SD, leading to further interest in high-compression-ratio codecs like h.264 and, presently, h.265.
The conversion step from DV to MPEG-2 was not without forethought: color samples are co-sited within a field and frame so that minimal color aberration is possible. MPEG-2 also allows for profiles that are interlaced as well as progressive, and later h.264 formats likewise allow field-based storage, though these are rarely used or supported by ordinary hardware playback equipment. That leads back to the observation that only a sophisticated and complete software playback system can hope to deliver artifact-free, full-resolution, temporally undistorted playback of old compressed content, and if that content has been converted to a frame-based progressive format, there is no way to undo the irrevocable choice made in the era in which it was 'mashed together'.
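Here is a toy Python illustration of my own, not anything from a real converter, of why that frame-based 'mashing together' can't be undone: averaging chroma vertically across a whole frame blends lines that were captured 1/60 s apart, while a field-aware conversion keeps the two moments separate.

```python
# Toy illustration: vertical chroma downsampling on interlaced video.
# Odd and even lines of an interlaced frame come from different moments in
# time; a colored object has moved between the two fields, so its chroma
# value differs (100 in field A, 200 in field B). Values are arbitrary.

field_a = [100, 100, 100, 100]   # chroma of lines 0,2,4,6 (time t)
field_b = [200, 200, 200, 200]   # chroma of lines 1,3,5,7 (time t + 1/60 s)

# Interleave into one "frame" the way a frame-based converter sees it.
frame = [v for pair in zip(field_a, field_b) for v in pair]

# Frame-based 4:2:0 conversion: average vertically adjacent lines, which
# belong to *different* fields, so chroma from two instants is blended.
frame_based = [(frame[i] + frame[i + 1]) / 2 for i in range(0, len(frame), 2)]

# Field-based (interlace-aware) conversion: average within each field only.
field_based_a = [(field_a[i] + field_a[i + 1]) / 2 for i in range(0, len(field_a), 2)]
field_based_b = [(field_b[i] + field_b[i + 1]) / 2 for i in range(0, len(field_b), 2)]

print("frame-based:", frame_based)                    # all 150s (temporal smear)
print("field-based:", field_based_a, field_based_b)   # 100s and 200s kept apart
# Once the frame-based blend is stored, the two original instants can no
# longer be separated; the 'mashed together' result is permanent.
```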
This all leads me to personally conclude that for VHS, or possibly Betamax, capture to an archival format, DV is probably the best choice. MPEG-2 and h.264/h.265 files are smaller and are still serviceable delivery formats, and because of their size and easy portability they may make the survival of 'a copy', even if not the best quality available, more likely.
This has to be taken as a recommendation for SD content only, since DV did not support larger frame sizes, and almost all content after the NTSC-to-ATSC transition is already digital, usually progressive, with a larger color sample budget.
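To make the workflow concrete: setting the original Windows tools aside, a delivery copy can be derived from the DV master today with something like ffmpeg (my example, not part of the tooling discussed above); the filenames, flags, and quality values are illustrative assumptions only.

```python
# Sketch (assumed, not from the original post): keep the captured DV file as
# the archive, and derive a smaller h.264 delivery copy with ffmpeg.
# Filenames and quality settings here are illustrative choices.
import subprocess

master = "capture_tape01.dv"      # hypothetical archival DV capture
delivery = "tape01_delivery.mp4"  # smaller copy for sharing/playback

subprocess.run([
    "ffmpeg", "-i", master,
    "-c:v", "libx264", "-crf", "18", "-preset", "slow",
    "-flags", "+ildct+ilme",      # interlace-aware encoding, so fields are
                                  # stored as fields rather than deinterlaced
    "-c:a", "aac", "-b:a", "192k",
    delivery,
], check=True)
```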
Dealing with DV bridges inevitably leads to questions of time base correction, frame synchronization, and how to use a FireWire device in Windows 7 or later operating systems.
Time base correction and frame synchronization are prerequisites for feeding a hardware codec device, even a DVD recorder; effectively, a DVD recorder with pass-through processing is a combined TBC and frame synchronizer. Time base correction is mostly about detecting a VHS versus broadcast signal and compensating for the noisy head switching at the beginning and end of a field of video, to prevent 'flag waving' or tearing at the top or bottom of the screen. These artifacts are often hidden by the bezels on CRT monitors, since they sit in the overscan region of the picture, but they become visible when watching the video in a preview window or on a digital monitor.
'Swimming', or variable-length drift between horizontal sweep lines, must also be corrected by devices capable of detecting it and re-timing the lines relative to each other, a process known as velocity compensation, to square up and make uniform the stack of horizontal lines from top to bottom in a field and frame.
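As a loose software analogy of what velocity compensation does (my own sketch, not how any real TBC is implemented): resample every scanline onto one uniform time base so the lines stack squarely.

```python
# Loose software analogy of velocity compensation: each scanline off the tape
# arrives with a slightly different effective length, and the corrector
# resamples every line onto one uniform time base so the lines stack squarely.

def resample_line(samples, target_len):
    """Linearly interpolate a scanline to a fixed number of samples."""
    out = []
    for i in range(target_len):
        pos = i * (len(samples) - 1) / (target_len - 1)  # position in source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Three lines showing the same vertical edge, but with jittery line lengths.
wobbly_field = [
    [0, 0, 0, 100, 100],          # line came in slightly short
    [0, 0, 0, 0, 100, 100, 100],  # line came in slightly long
    [0, 0, 0, 100, 100, 100],     # nominal length
]
corrected = [resample_line(line, 6) for line in wobbly_field]
for line in corrected:
    print([round(v) for v in line])
# After correction every line has 6 samples, so the edge falls in roughly the
# same place on every line instead of "swimming" from line to line.
```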
Frame synchronization is more about coping with a damaged or incomplete field or frame of video and recovering automatically so that a 'standard' frame rate is always delivered. Whether it does that by repeating a frame to make up for the unusable remnants or by briefly inserting a black, blue, or blank frame into the stream, and whether it simultaneously throws out audio samples to keep audio and video in lockstep, are all design decisions or settings options unique to the specific frame synchronizer being used.
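A small sketch of that design space, with the repeat-a-frame versus insert-a-filler-frame choice spelled out; the structure is my own illustration of the options described above, not any particular device's logic.

```python
# Sketch of a frame synchronizer: always emit one frame per tick, no matter
# what arrives. "repeat" vs "black" mirrors the options described above; a
# real unit would also decide whether to discard the matching audio samples.

BLACK = "<black frame>"

def frame_sync(incoming, mode="repeat"):
    """incoming: list of frames, where None marks a damaged/unusable frame."""
    last_good = BLACK      # until the first good frame arrives, fall back to black
    output = []
    filler_count = 0
    for frame in incoming:
        if frame is not None:
            last_good = frame
            output.append(frame)
        else:
            filler_count += 1
            output.append(last_good if mode == "repeat" else BLACK)
    return output, filler_count

frames_in = ["f1", "f2", None, "f4", None, None, "f7"]
print(frame_sync(frames_in, mode="repeat"))
print(frame_sync(frames_in, mode="black"))
# Either way the output cadence stays constant; what differs is what viewers
# see at the glitch and how audio is handled to hold sync.
```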
DV bridges could react to a loss of signal by aborting a capture, by ending it and starting a new one as soon as a signal became available again, or by pausing the capture and requiring manual intervention. Some would even make a timecode note so these points could be revisited and addressed later.
FireWire DVRs were never common in the United States; LG and Samsung provided a couple of examples, but by then TiVo, EyeTV, and Windows Media Center had claimed the larger share of the public's business. Formac (UK), I-O Data (Japan), and EyeTV (mostly in EU countries) did dabble in FireWire tuners, relegating the DVR piece to software on a connected PC, but for the most part people were largely unaware of their existence.