I've been busy exploring (or re-exploring) the process of converting VHS tapes to PC files before the equipment is totally gone, or the tapes disintegrate. JVC and Funai have stopped producing VHS players and eBay is starting to run out of even used VHS machines.
I started with a simple survey of the methods and tried to pick a simple path: USB dongle, to simple capture software like VirtualDub, to a computer file. But two things became apparent.
1. I didn't realize how important a good VHS player was, and the initial results were bad.
2. I didn't know nearly enough about VHS video signals to make reasonable decisions about equipment or software.
So I turned to some online forums like VideoHelp, AVSForum and DigitalFAQ.
Time and again I reached a point of decision, only for it to collapse when I posted a summary of my efforts and learned there was still much to learn.
So a quick, knee-jerk decision to pick up the project turned into months of reading and correlating, and embarrassment online before people who seemed to know better than I did, because they were retired and had been in the broadcast business for many years.
Meanwhile a clock is ticking.. not only on the tapes, which are getting older, but on the hardware and its availability.
VHS playback is a very complex thing.
To understand a good VHS player versus a marginal or bad one, you have to start with an ideal assumption about the source tape. Then imagine all the things that could go wrong and could be handled by the choice of VHS player, or by anything you insert between the player and your capture device.
You don't normally have access to a perfect tape, or a perfect capture device.. and what could go wrong can only be speculated about, or explained to you by more experienced people.
So start with the near worst case: a broadcast over-the-air signal captured by a TV tuner and then put on a tape.
I learned the VHS system was invented by JVC (Victor Company of Japan), and the first VHS recorder/player was the HR-3300, released in the mid-1970s.
Three major companies in Japan were working on a home-market video recorder: Sony, JVC and Matsushita (aka Panasonic). Sony wanted to use the "U" method of lacing the tape around a helical drum, with a faster tape speed that recorded only one hour of video. Sony offered their system to JVC and Panasonic, who turned it down in favor of "M" tape lacing and a slower tape speed that fit two hours on a tape, thinking home users would prefer to save "movies" on the slightly larger and more expensive tapes. Most of that was academic, and both turned out to be right in different market segments.. but somewhat like Compaq vs IBM years later, the lower cost to the consumer and greater "choice" (or "confusion") led to VHS becoming the most popular.
So VHS stands for "Video Home System", and JVC marketed three versions of their recorders:
HR - Home Recorders
SR - Service Recorders
BR - Broadcast Recorders
There were others targeted at particular industries, but these were/are the most accessible to people today.. though they are becoming scarce.
A video signal is a strange kludge of "encoding" and "compression" done with hardware circuitry rather than digital processing. Somewhat like the typewriter, it had to slow things down.. because the equipment of the day was much slower than today's, it had very limited bandwidth in which to transmit even a luminance signal... bright and dark spots on a screen.
A television signal is basically two overlapping pictures called fields, transmitted one after the other.. they are taken at two different times, so there is a slight "gap" in between them in which motion is missed, and a difference can arise between the two pictures if they are displayed at the same time.
A normal television is designed to never show both pictures at the same time. The human eye's persistence of vision, while one picture is "fading" and the next one is being drawn, leads to a phenomenon where the brain "interpolates" or automatically fills in the visual gaps. Because two pictures are being shown, but not at the same time, a full frame with all of the vertical resolution is called [Interlaced].. literally "woven" in time from the (odd) and (even) lines of the two pictures.
So two fields make up a frame of video, and the next frame is constructed by showing two more "interlaced" fields.. doing this means the signal for a single field needs only one-half the bandwidth that would be needed if both pictures were woven together and transmitted at the same time.
It also means that although video is delivered at 30 frames per second, motion appears to occur at 60 steps per second.. motion resolution is preserved, even though frame rate is not.
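The field and frame arithmetic above can be sketched in a few lines. This is just a back-of-the-envelope illustration using NTSC's nominal rates (the exact 30000/1001 figure is the standard NTSC frame rate):

```python
# NTSC interlaced timing: two fields are woven into each frame.
FRAME_RATE = 30000 / 1001      # ~29.97 frames per second (NTSC)
FIELDS_PER_FRAME = 2

field_rate = FRAME_RATE * FIELDS_PER_FRAME   # ~59.94 fields per second

# Each field carries half the scanlines, so transmitting one field
# needs roughly half the bandwidth of a full progressive frame.
print(f"frame rate: {FRAME_RATE:.2f} Hz")       # prints 29.97 Hz
print(f"field (motion) rate: {field_rate:.2f} Hz")  # prints 59.94 Hz
```

The motion "steps" happen at the field rate, which is why interlaced video looks smoother than its frame rate alone would suggest.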
This is important to know, because computer screens and modern LCD TVs display in what is called "Progressive" mode.. one frame at a time, no fields. When that happens, the difference between the two fields [woven] into one frame becomes noticeable to the human eye.. a progressive display has to render the video "de-interlaced", which looks bad most of the time.
A person will notice it [more] when there is faster action, or more "difference" between the two field pictures woven into the one "progressive" frame.. these artifacts look like herringbone, "mice teeth", or zig-zag "lines" around moving things in a video scene.
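The weaving of two fields, and the "mice teeth" that appear when they disagree, can be demonstrated with a toy sketch. The `weave` function and the little ASCII "fields" here are purely illustrative, not any real capture API:

```python
# Weave two fields (alternating scanlines) into one progressive frame.
# If the fields were captured at different instants and something moved,
# the woven frame shows zig-zag "mice teeth" on moving edges.

def weave(field_even, field_odd):
    """Interleave the even-line field with the odd-line field."""
    frame = []
    for even_line, odd_line in zip(field_even, field_odd):
        frame.append(even_line)
        frame.append(odd_line)
    return frame

# A bright bar that moved two "pixels" right between the two field scans:
field_even = ["..##..", "..##..", "..##.."]
field_odd  = ["....##", "....##", "....##"]

for line in weave(field_even, field_odd):
    print(line)
# Alternating lines disagree about where the bar is -- that alternating
# displacement is exactly the combing a progressive display makes visible.
```

Shown one field at a time (as a CRT does), each picture is clean; only when both are forced into a single progressive frame does the comb appear.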
There are different ways of "compensating" during capture, or after capture during conversion to a compressed file format, to make interlaced video look "better" when displayed on a progressive display.. but these methods keep changing from year to year, and previous methods are quickly regarded as "poor" compared to newer ones.
It's now considered [bad] to even attempt to "squash" or "de-interlace" video that started out "interlaced" when it is captured. Better to store it interlaced and let the software displaying it in the future use modern methods.. or, if the display device is capable of displaying interlaced video as interlaced video, give it that opportunity. -- The old reasons for "de-interlacing" when capturing, or shortly after, have gone away. De-interlacing "on the fly" during playback was once considered slow and CPU intensive, and of poor quality on low-powered devices like cell phones.. most now have specialized hardware for it, and CPU power has increased exponentially.. so it is no longer an issue.
It's also important to know that capturing "interlaced" video as "interlaced" depends on a certain "minimal" vertical resolution in the capture device. You can often configure a capture device to capture at 320x240 or 640x480 and so forth. The second number is how many vertically "stacked" horizontal lines to capture. For a VHS signal the vertical resolution is set by the NTSC standard at 525 total lines, of which about 480 are visible (it varies because of the vertical blanking interval above and below the scanlines, which may be hidden or not shown on a TV with a bezel). Capture at anything less than that, and the resulting video file will not have enough information to recreate the "interlaced" field effect which an interlaced playback device can use. It would effectively have to "mush" it all together and treat it as progressive, with all the attendant problems that causes.
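A quick sanity check of a capture height against that requirement can be written as a one-line rule. The 480-line constant is the approximate visible NTSC line count discussed above; the function name is just for illustration:

```python
# A capture file can only preserve interlacing if it keeps every visible
# scanline of both fields. NTSC has ~480 visible lines (~240 per field),
# so any capture height below 480 forces the two fields to be merged
# (treated as progressive) and the field structure is lost for good.

NTSC_ACTIVE_LINES = 480   # visible lines; the full NTSC standard is 525 total

def preserves_interlacing(capture_height):
    return capture_height >= NTSC_ACTIVE_LINES

for height in (240, 480):
    print(height, preserves_interlacing(height))
# 240 -> False: each stored line is already a blend/discard of field data
# 480 -> True:  both fields' lines survive intact in the file
```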
Video signal also has a very confusing standard for measuring [horizontal "dot" resolution].
Called TVL - "television vertical lines" - it sounds in English much like a reference to "the vertical axis" in a typical mathematical X-Y coordinate system.. but it deliberately does not mean what it sounds like it means.
Rather, TVL "vertical lines" is the answer to a question about a video signal observed on a monitor or display device. [That is], it answers the question: how many vertical lines can you distinguish horizontally in a video image, along a horizontal distance equal to the vertical height of the image? -- It is [not] a count of the total number of dots you can see across the entire "width" of a horizontal video line... but (only) the number of distinct vertical lines, running from the top scanline to the bottom scanline, that fit across a horizontal distance defined by the vertical height of the video. The vertical height of the video is a known quantity; it's a fixed number of stacked scanlines, and it does not vary even if you can't see all of them.
I think I know why they picked this seemingly "weird" definition, but it doesn't help new people, with no prior experience, to understand it.
First, a horizontal line is not always the same length on every monitor. A display device in the years when CRT monitors and TVs were used had a bezel.. and the "hidden" portion of each stacked horizontal line was [variable]. So they "intentionally" defined the "test" or "question" to end somewhere within the center of the line, and not necessarily to start from the left side, like on a number line.. because the left side could be hidden under a bezel as well. -- So much for the history lesson.
But worse, after they got this answer, they intentionally continued to refer to it as "vertical" resolution, because the test involved counting striped lines that ran vertically, even though it was measuring "horizontal dot" resolution.
In the digital world of progressive displays, things "line up" more like the X-Y number line from mathematics: a 640x480 image is 640 pixels across and 480 pixels up or down.
In the video world of interlaced displays, things get called weird names. First, it's assumed you know the frame height is the [sum] of two fields, totaling 525 lines stacked one on top of the other vertically (about 480 of them visible), with some lost to the top and bottom "bezel" or blanking interval. But then they refer to "resolution" in "lines".. lines counted "horizontally" to give you horizontal "dot" resolution.
The end result for a VHS video image is that there are about 480 vertically stacked horizontal lines, and about 240 horizontally aligned (like dominoes) dots resolvable on each of those lines.
To put it another way, in the digital world perspective, the resolution is about 240x480 for a VHS video signal. (For a Black & White picture)
Color images use the same bandwidth, but a subcarrier brings [chroma] information along to decorate the same line with color, on televisions or displays that know how to use the extra information. So if you want to capture the color information along with the b&w [luma] information, you have to take more samples from the same line.
Televisions displayed color by firing the signal at triplets of dots on each horizontal line, red-green-blue, in various arrangements: linear, circular cluster, etc... but to be effective from a distance they had to appear as [one] colored dot. To a modern digital display that larger dot appears as [one] color.
But in order to capture all the information from a line of dots that may or may not be colored, the capture device must "sample" the line [three times] as much. So even though the horizontal dot resolution is 240 [luma], to capture in color you must sample at 240 x 3 = 720 [luma+chroma], for an overall capture resolution of around 720x480 for the entire frame.
A "poorer" resolution signal, like the limited bandwidth on a VHS tape, is limited to less than 240 TVL (vertical tick lines crossing the horizontal axis), leading to something like 200 x 480 instead of the 240 x 480 of a broadcast signal. So to capture a VHS signal, even accounting for color information (200 x 3 = 600) x 480, a 640 x 480 capture setting on a VHS signal digitizer is usually (more than) enough sample resolution to capture (all the signal that there is) coming from a VHS player.
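That arithmetic can be condensed into a tiny helper. Note this follows the article's own rule of thumb (sample at 3x the TVL figure to pick up chroma); real digitizers fix their sample rate by standard (e.g. 720 samples per line via BT.601), so treat this as a sketch of the reasoning, not of actual hardware:

```python
# Capture-width arithmetic following the article's rule of thumb:
# horizontal dots ~= TVL, and sample x3 to pick up chroma as well.

def capture_width(tvl, color=True):
    """Samples per line needed, per the article's TVL x 3 reasoning."""
    return tvl * 3 if color else tvl

print(capture_width(240))  # broadcast-quality source -> 720 samples wide
print(capture_width(200))  # VHS-quality source       -> 600 samples wide
# 600 < 640, so a 640x480 capture setting comfortably oversamples VHS.
```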
S-VHS was a different video standard which some people could afford, if they had better tape quality, a better signal source, and a VCR that could record in S-VHS to an S-VHS tape.. not a common occurrence, though S-VHS-capable VCRs eventually became common. And when forced to record broadcast as S-VHS on S-VHS tape, it did look slightly better. More often though, people didn't have S-VHS recordings even when they did buy S-VHS tape, because they never bothered to force the machine to record in S-VHS mode.
It is important to recognize, though, that S-VHS recorded at a higher "TVL", or vertical line, resolution
(remember: vertical line resolution is actually horizontal dot resolution; the number of vertically stacked horizontal lines was still 525, or about 480 visible, set by the NTSC standard).
This meant the horizontal "capture" resolution should be increased beyond 720 to capture the extra horizontal line information. Advertised as "greater than" 400 "vertical lines" (TVL): 400 x 3 = 1200, for a new capture setting of 1200x480.
(but no broadcast signal could reach 400 TVL of resolution.. only locally generated signals from computers, certain LaserDisc players, maybe special cable boxes, [later Blu-ray players] could provide 400 TVL)
S-VHS-ET and SQPB ("S-VHS Quasi Playback") were [not] a new video standard.
Basically, S-VHS-ET let you [declare] a tape as S-VHS capable even if it was really intended only for VHS recordings. In that way you could buy cheaper tapes, with the understanding that the quality of the recording might vary with the brand and quality of the tape actually used. Though many recorders picked up the label as a feature, it wasn't used that often. Quasi Playback was a feature on later VHS-only recorders that allowed playing back recordings made in S-VHS format on a plain old VHS machine, in a slightly "fuzzy" image mode.. simply to get backwards playability under marginal circumstances.
HQ was a somewhat less successful attempt to encourage enhanced noise reduction to improve picture quality, with no real change to the VHS and S-VHS signal format standards.
In the audio space, VHS started with a single mono audio track stored along the edge of one side of the tape, like a cassette recorder. Rarely, this was upgraded to a low-quality stereo pair that split the same mono track space into two tracks with half their normal resolution.. it was very uncommon. Later, (HiFi) capability introduced additional heads on the video drum to lay down a special "deep" stereo track [underneath] the video tracks. This was popular for a few reasons, not the least of which was near-CD-quality audio, and the freeing up of the original mono track so it could be used for "dubbing" alternative audio, like another language, or for custom time codes, or for wiping the audio and replacing it with a new track without editing the video.
HiFi had a slight problem, however, in that "switching" noise, due to the "switch" from one audio head on the video drum (as it swung around like a merry-go-round) to the other, could sometimes be heard as a low "buzz".. Dolby equalization was used to band-limit or minimize the problem.
For various reasons, people would sometimes choose to "change" the audio source selected when playing a tape back, and this became the [audio monitor] switch on many VCRs: Norm generally played HiFi; Linear/Mono played the original "linear audio" track on the edge of the tape; Mix played both HiFi and Linear (and gave a tunnel effect); and Left or Right selected one or the other stereo channel. For backwards compatibility a linear mono track was normally recorded at the same time the stereo track was being embedded "below" the video tracks in the center of the tape.
So while capturing video, the audio bandwidth may be as low as 8 kHz or as high as 21 kHz, and the sound card used with some video capture gear should sample at roughly twice the highest frequency present.. 16 kHz up to 44 kHz.. to make sure all of the sound's frequency range is captured. Even though less will often be more than sufficient.
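The "twice the highest frequency" rule is the Nyquist criterion, and it turns the numbers above into a one-liner. The figures in the comments correspond to the bandwidths just mentioned:

```python
# Nyquist: to preserve audio content up to f_max, sample at >= 2 * f_max.

def min_sample_rate(f_max_hz):
    """Minimum sample rate (Hz) to capture content up to f_max_hz."""
    return 2 * f_max_hz

print(min_sample_rate(8_000))   # slow-speed linear track -> 16000 Hz
print(min_sample_rate(21_000))  # HiFi track              -> 42000 Hz
# The common 44.1 kHz sound-card rate clears the worst case comfortably.
```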
Obviously, for the linear track, the slower the tape moves, the less bandwidth is available for sound, and thus its frequency range will fall quite a bit.
Beginning with the HR-7000 series for JVC, and with different models in the SR and BR lines, JVC introduced various features to improve signal conditioning before recording and during playback. Some reduced noise and boosted signal-to-noise ratio, others stabilized the video signal to better conform to NTSC standards, and still others sought to improve the separation between luma and chroma information before recombining them to put them on tape, or when extracting them to send to optional S-Video jacks.
S-VHS is not S-Video
S-VHS was a format declaration regarding how a video signal was processed and stored on video tape.
S-Video was an electrical and connector standard regarding the separation of luma from chroma information.
A video signal normally has luma and chroma mixed together, which tends to blend or "smear" when transmitted over long distances or across poor quality cables. By keeping them separate as much as possible, this smearing effect does not happen and video quality remains higher.
Not all VHS players have S-video connectors, but it was relatively common in later years.
Selecting a VHS player, or VCR recorder, is made more difficult once all of the options in later equipment begin to be understood. It's easy to focus on the worst case scenario and become paralyzed with fear that an uninformed choice will be made.
But new gear is no longer being made.
The gear made last, and thus (newest today) at the end of the production line, is not necessarily the most appropriate; it's been reported that cost-saving measures on the final machines rendered them worse than slightly older machines. Again, a contradictory if not unhelpful conclusion.
Add to this that the brands and lines from competing companies (Panasonic, JVC, and eventually Sony) varied considerably in quality and reliability.. and performed variably with tapes originally recorded on competitors' equipment.
It's generally presumed that if you have fewer than 100 tapes, a service bureau specializing in VHS-to-DVD or VHS-to-PC transfer is the lowest cost option.. but these businesses are disappearing, and the quality of their service is also variable.. word of mouth is the best way to find a good one.. or trials with test tapes on a personal basis.
Learning all about the lines of one of the big three makers is time consuming, but is probably worth the time. Recommendations for Panasonic and JVC are easiest to come by.. generally the ProLine for Panasonic or the Service (SR) line for JVC are good (if not expensive) places to start.. and then choose only models that have a built-in line TBC and some sort of video noise reduction system.
Tracking is the feature of all VHS players that reads the bottom edge of a VHS tape and extracts a pulse indicating the relative speed of the tape past the playback heads. If it is too slow, the machine will attempt to speed the tape up; too fast, it will attempt to slow it down. This real-time "feedback" loop is the first line of stabilizing a playback picture. On top of that, the TBC will attempt to regenerate the NTSC standard video signal and correct any errors it detects. Other features will attempt to improve color purity or reduce spurious noise in the signal. A tape can, however, stretch over time, distort, or lose its tracking information entirely, in which case "good" playback on one brand of machine may be unsuccessful but may succeed on a different brand.
For this reason, transfer professionals tend to own or have access to multiple brands and models, and can try different machines in an attempt to capture a clean transfer. This drives up cost and makes using a service bureau all the more attractive.
Acquiring a VCR is only the beginning
VCRs are all old; even the last Funai or Sanyo machines came off the line in 2016. The machines have rubber belts, rubber tires and rubber pinch rollers designed to last about 1000 to 2000 hours, or about 6 weeks of continuous use, before needing cleaning or replacement. Those parts also age just sitting on a shelf: ozone and other air impurities, or debris from played-back tapes, can attack the rubber and accelerate aging. For large quantities of tape, it's likely that, if not as soon as it's purchased then soon thereafter, a machine will need professional service from someone who knows how to disassemble and inspect the brand and model you acquire.. and those people are retiring or have moved on from their last jobs.. they are becoming increasingly scarce, and expensive in dollars and time to find. Almost before buying a VCR, you need to figure out who will service it and where. And when shipping a VCR cross-country for repair, it can easily become damaged, stolen or lost.
But once you have a working VCR and it's producing a clean signal
There is still the issue of signal stability and Macrovision, or copy protection, distortion.
Copy protection was enforced by deliberately damaging part of the video signal. Capture devices detect this damage, and can judge even false video signal damage as legitimate copy protection enforcement.. and refuse to capture the signal.. or at the very least, even if captured, the damaged signal may appear distorted.. usually as flickering from high to low brightness.
The line TBC and noise reduction circuitry in a good VCR can clean up some problems, but Macrovision copy protection was enforced at the frame level.. and all but the most expensive and rare VCRs lack frame-level correction. So often an external (or "in line") full frame TBC is needed to correct frame-level video signal errors.. and although not its intended purpose, this will also correct copy protection damage to the signal.
TBC - Time Base Correctors
TBCs come in several cost ranges, and some brands are known to be better than others; universally, however, they tend to perform poorly if run for long periods or allowed to overheat. Generally the lowest cost ones suitable for real use start at about $400 minimum but often cost much more, if they can be found.. they are also no longer being made.. like the VCR, the need for NTSC video time base correctors is going away. Over-the-air digital video signals no longer have to conform to the old NTSC standards, and although there are exceptions, the old video signal standards no longer apply, so the equipment is no longer needed. Another reason a service bureau can be more attractive.
A video signal can lose luminance or chroma information, or it can become skewed or desaturated. An external video signal processor can be used to artificially restore the base or "floor" for some signal problems, or reduce the "ceiling" in some cases. This can often be done post-capture as well, at greater CPU and time expense, but as in photography, once detail is "blown out" it can never be recovered.. so sometimes having an external video proc to put "in line" can be beneficial.
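Why "blown out" detail is unrecoverable can be shown with a toy digitizer model. The `capture` function and its gain figures are hypothetical, just modeling an 8-bit sampler clipping at its maximum code value:

```python
# Once samples clip at the maximum code value, different input levels
# collapse to the same number, and no later stretch can tell them apart.
# Taming the level BEFORE capture (an in-line proc amp) avoids the clip.

def capture(levels, gain=1.0, max_code=255):
    """Model an 8-bit digitizer: scale the input, clip at max_code."""
    return [min(round(v * gain), max_code) for v in levels]

scene = [200, 230, 250]          # three distinct brightness steps
hot   = capture(scene, gain=1.5) # overdriven input: everything clips
ok    = capture(scene, gain=1.0) # level tamed before the digitizer

print(hot)  # [255, 255, 255] -- the three steps are gone for good
print(ok)   # [200, 230, 250] -- detail preserved
```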
Video capture device choices usually depend on budget, but also on available equipment, intended post-processing (if any), and final destination. In the past people usually chose to capture to MPEG2, to save space and because a great deal of effort went into the DVD standard; people are familiar with the output quality.. it's a familiar "known". People worried, however, about "editability" and advanced editing "later". Simple commercial cuts and joins are relatively easy and, with experience, can be accomplished without re-encoding and suffering a "generation loss" that would reduce the final output, even if it is MPEG2. Greater compression to H.264 or DivX is becoming more familiar and "possible", but there are great long-term tradeoffs to storing archival footage in such a compressed form.
On the other hand, precious footage kept at full capture resolution and left uncompressed (a wedding video, for example), while larger on a data DVD or hard drive, is better stored that way, with compressed DVD "print" copies made for other people as needed.
A simple "raw" AVI capture at full resolution can generally be handled by any video capture card or USB capture dongle.. however, the capture software usually needs to be compatible with Windows and DirectX, so that a range of PC equipment and operating system versions can be selected. Video capture devices tend to be sensitive to overheating and "dropped" frames. A "raw" AVI capture records the video separately from the audio and combines them into the final file.. the capture software driving the capture hardware has to "compensate" for any dropped frames, and note them so they can be accounted for.. usually it's best to minimize the possibility of dropped frames.. or the video and audio will drift out of sync. Compensation can be anything from [a] allowing them to drift and letting you fix it later, [b] chopping up sections of audio to keep them in sync, or [c] duplicating frames in the file to make up the difference.. all of which have consequences.
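The drift problem is easy to quantify. Audio runs on its own clock, so every uncompensated dropped video frame shifts the picture relative to the sound by one frame period; this sketch just does that arithmetic:

```python
# Audio/video drift in a raw AVI capture: each dropped (and uncompensated)
# video frame shifts the picture earlier relative to the audio by one
# frame period.

FRAME_RATE = 30000 / 1001  # NTSC, ~29.97 fps

def drift_seconds(dropped_frames, frame_rate=FRAME_RATE):
    return dropped_frames / frame_rate

# One dropped frame per minute over a two-hour tape = 120 dropped frames:
print(f"{drift_seconds(120):.2f} s of drift")  # ~4 seconds off by the end
```

Four seconds of lip-sync error is glaring, which is why capture software either pads duplicate frames or trims audio as it goes.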
A more complex "MPEG2" capture will capture at full resolution inside the video capture hardware and then compress down into streamed video data bits intermingled with audio data bits. The combined data stream "locks" the video and audio together, so the compensation is automatically and immediately taken into account. Since the hardware is dedicated to capture and compression, dropped frames do not lead to drifting audio.. they are "locked".. a capture device may have glitches, but is never out of sync.
Mac's and DV - Digital Video
Apple Mac systems were and are known for video editing, but capture was never their long-term goal. Some capture hardware is available, but the cost and selection are much more out of the customer's control. DV - digital video - was a popular camcorder and studio format (actually two formats with vast tradeoffs).. which left "video capture" entirely up to the video "filming" device.
That is, the camcorder had a "static" or fixed choice of video capture luma and chroma resolution, and one digital file storage format on video tape, which stored a digital "file" instead of an audio and video signal. DV "capture" is a misnomer; by the time the DV tape is written in the camera, it is a digital file.
Instead, DV "transfer" over FireWire is merely the copying of a digital file between two computers. And "printing" or dubbing is merely the copying of a file from an NLE - Non-Linear Editor - back to the digital storage tape in the camera.
In this way DV tapes are merely "data tapes", and normally "video capture" never happens at a desk; it's done in real time in the field.
DV pass-through can use the inputs and outputs on a camcorder to "pass" a video signal through the camcorder, using its video signal capture circuitry to create a digital DV file and send it over FireWire to another DV camcorder or an NLE computer.
But it's very important to realize that the hard decisions, about whether to compress and which hardware or software to use, are already made. The encoder was chosen by the DV standard, and re-encoding or "transcoding" in a PC or Mac in NLE (non-linear editor) software will represent a "generation loss".. DV is finalized, "cooked" video; it can't be changed after the fact without losing data. But for simple cuts and joins, re-encoding or transcoding isn't needed. If the limited dynamic range of the scenes "filmed" is acceptable (as for news reporting) it's okay.. but not for things like "movies".
Consumer DV also used a compression choice that saved less detail, to make equipment cheaper for people in the late 1990s when hardware was much slower.. more detail had to be sacrificed than would be necessary today.
Broadcast DV (or one of its variants) used tapes that stored fewer minutes but also used a less aggressive encoder to retain more image quality.. it was only available to users of "DV broadcast" equipment, and even today it is not that great.. there are better choices now.
In fact, people did commonly "dub" (duplicate by dubbing) VHS to DV in combo recorders for a time, later replaced by VHS-to-DVD recorders.. finally leading to off-air recording on DVRs like TiVos and personal PVRs like Windows Media Center on the PC or EyeTV on the Mac, using dedicated MPEG2/4 capture tuner/encoders like the SiliconDust HDHomeRun devices.
The Linux operating system evolved from little more than a boot loader in the early 90s to a full-fledged graphical desktop by the year 2000. Its users and developers sought to "dub" or duplicate much of the evolving functionality of the most popular operating system(s) of the day.. mostly Microsoft Windows.. by reverse engineering the software and hardware created for sale to users of those systems. Thus, while inexpensive, and a great training ground for future programmers and for daily users with well-defined needs.. it was mostly a "build it yourself" and "self-supported" operating system. The makers of video graphics production and capture gear were usually financially motivated, and protected their investments by patenting their software and hardware methods to prevent competition. Discovery through reverse engineering was often the only way to construct a framework; occasionally companies going out of business would "gift" their technical knowledge, but this was rare.
In that light, Linux haltingly started and stopped many video production and capture projects, the ultimate winner being the mostly kernel-based driver interface called "Video for Linux", abbreviated v4l and later v4l2 (version two). Plug-in drivers could be created to "expose" the features of capture devices, which software could then query when searching for capture devices at startup. Several semi-commercial and freeware NLE systems emerged to support v4l2, but they always lagged. -- At this point in history, capture is pretty well supported for a small selection of video capture hardware, but since video capture hardware is no longer being developed.. that is, no one is creating new capture systems now that the signal standard is in decline.. the choices will probably remain the same or shrink; they won't increase. Having many video capture hardware choices is probably better, since some will have bugs that are only discovered later.
So while "possible", Linux is probably not the most robust, frustration-free video capture platform, unless you were brought to Linux by choosing the video capture hardware "first" and being told Linux is the best operating system for that hardware. Choosing the operating system before the hardware is rarely the best approach for video capture.
What replaced DV tapes
Like VHS, the time of DV tape is also passing. Camcorders and "filming" equipment started slowly, but have now transitioned to mostly "progressive" recording of digital video, using a myriad of compressed or uncompressed encoding methods, direct to data files: first on small portable hard drives, then custom solid-state memory cards, and finally plain old SSDs.
Since the "progressive" method has much more in common with actual celluloid "film" than older "interlaced" video signal camcorders.. studios and broadcasters eagerly adopted the transition, and in some cases no longer even use actual film, as the budget proposition inevitably trades places as to which is more expensive.
While JVC and Panasonic and many others adopted the MiniDV cassette, Sony invented the Hi8 format, and there were a few other formats like VHS-C, but the majority were DV centric, and like VHS that is what is mostly being transferred today. For the most part DV tapes are becoming scarce and hard to find except as New Old Stock (N.O.S.).
DVD to BluRay
PVRs and streaming have mostly reduced the market and demand for personal storage; cell phone video is generally much smaller and uploaded to the cloud for long-term storage. Or personal videos are kept on local hard drives and synchronized with long-term backups. The storage format changes, but is generally unimportant to the end user, since transcoding on the fly in the background while displaying a file has become commonplace.
DVD as a long-term format is being called into question, even using the M-Disc format; burning a DVD-R takes time, and the possibility of scratching or damaging a disc calls their reliability into question. Also, many DVD burner makers have left the market; Samsung, Sony, even HLDT and LiteOn seem unlikely to remain much longer. Blu-ray and double-density Blu-ray as a movie storage format is strongly resisted, and Blu-ray PVRs are highly restricted in the US from burning copies, or must comply with burn-once rules.. meaning there are few Blu-ray PVRs, and they must compete with hard drives and streaming storage in the cloud.
And optical burners and players tend to have rubber belts which wear out over time, meaning optical drives, unless recently manufactured, will eventually become inoperable.. as demand goes down they are likely to increase in price.. so as a long-term storage format, there are some serious questions to think about.