1/03/2021

Categories for Render, Capture and Node Types: Microphones, Line Connectors and Speakers

In addition to the KSCATEGORY_AUDIO category, the INF file can also "decorate" the device driver as having other interfaces that the endpoint builder might want to probe.

These can help direct the builder when it probes and fills out a description for the device while making its Audio Endpoint.

For some of these types the builder attempts to discover capabilities and interconnected topologies, so that the finished Endpoint makes better sense to the end user.

Video capture and Audio capture applications typically try to "find" or make sense of the resources of a video capture card by strumming these "inventories" of endpoints and interfaces when building up a list of choosable sources and sinks of video and audio data.

Video Capture applications aren't very sophisticated about looking for sources beyond the conventions of the year in which they were written.. so a re-think or re-conceptualization of how usable resources are presented can lock them out of running properly on newer, re-thought operating systems.

"Some" device drivers hook themselves into the default audio or "desktop" mix when loaded.. some do not.. those that do are easier to support with legacy capture software, because that software merely has to capture from the system's default channels.

UAC (USB Audio Class) and UVC (USB Video Class) seem to be the latest "re-think" in the way audio and video information is presented as a resource to operating systems and programs. Both generally operate across USB serial buses.

Since Analog video capture is mostly long gone.. supporting moves between XP, Vista, 7, 8 and 8.1 is unlikely to be something that can be done for much longer. Even the device driver signing methods and the keepers of the hardware and kernel keys are consolidating at Microsoft.. locking more and more of the developer community out.. which could lead to an upheaval in computer programming moving forward.. abandoning Microsoft and Apple products for a more flexible and open system.. though unlikely Linux, since that too seems to have started blocking flexible development models. A stagnation of original thinking seems to be beginning a new dark age.

Ironically the XP operating system, without hardware device driver signing.. is the easiest to develop for.. and ReactOS may eventually prove to be the favored operating system of the future. It's hard to imagine Microsoft will mentor and shoulder the burden of solving problems for indiscernible and not immediately profitable motives long term.

Why Video Capture boards don't display Audio Endpoints in Vista and 7

I'm real fuzzy on this at the moment.

But it appears when going from XP to Vista, they re-thought how DirectSound or DirectShow audio devices were presented to programmers.

They invented something called the "audio endpoint builder service" which monitored the PnP insertion of device drivers, registering various characteristics of loaded device drivers in the registry or class tree.

When one added itself to the "KSCATEGORY_AUDIO" class category, it went to work "probing" the "device" attached to the system by inspecting its pins and their default parameters.. automatically constructing an "Audio Endpoint".

These virtual "Audio Endpoints" were conceptually thought to be "better" than using the DirectShow API, which was based on filters for accessing and manipulating the hardware on a device directly.

Instead, users and programmers were able to "think about" the Audio Endpoint as a fully fleshed out object, controlled with more generic commands to set up an "Endpoint Session" which was assumed "Blocked" until it was "un-Blocked" and began streaming data either In or Out of its device.

Once these Endpoints were constructed, they populated the Sound Control panels where they could be designated as the default Playback or default Recording Endpoint.. 

The reason video capture cards do not appear as Audio Endpoints is apparently because their device driver INF files do not "decorate" themselves with the "KSCATEGORY_AUDIO" class type.. so the "audio endpoint builder service" ignores them and never builds the Endpoints the Sound Control panels use to populate the Playback and Recording endpoint choices.

It seems the "pins" for In or Out should also be decorated with descriptions of whether they are WaveOut or WaveIn or some other type.. but mostly the rest of the DirectShow description should map to parts of the constructed Endpoints, since the Vista "re-conceptualizing" allowed for supporting legacy communications protocols and interfaces.

So.. it's possible these devices will just work once their device drivers are revised to carry the badge of "KSCATEGORY_AUDIO"

One potential problem however is "signed" device driver packages.. since they are supposed to come with a .CAT file and certificate set attesting that the device driver has not been tampered with.

It might be possible, after the device driver is installed.. to manually patch the registry to "add" the device driver/device to "KSCATEGORY_AUDIO" by adding its GUID.. if that's how it works.. but the same might also be needed for its Pins.
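As a sketch of what that manual patch would involve, the snippet below just constructs the registry path where kernel streaming device interfaces are registered per category. The KSCATEGORY_AUDIO GUID is the well-known value from ks.h; whether hand-adding an interface entry under this key is actually sufficient is exactly the open question above:

```python
# Illustrative only: builds the registry location the endpoint builder
# consults for KSCATEGORY_AUDIO device interfaces. Actually writing here
# by hand is the speculative step discussed in the text.
KSCATEGORY_AUDIO = "{6994AD04-93EF-11D0-A3CC-00A0C9223196}"  # from ks.h

def device_class_key(category_guid: str) -> str:
    """Registry path (under HKLM) listing interfaces of a KS category."""
    return r"SYSTEM\CurrentControlSet\Control\DeviceClasses" + "\\" + category_guid

print(device_class_key(KSCATEGORY_AUDIO))
```

On a live system this path would be inspected under HKEY_LOCAL_MACHINE (e.g. with regedit or the winreg module); the sketch deliberately stops short of writing anything.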

Win7 x86 seemed the easier place to avoid device driver signing problems for a test.. and self-signing was at one time possible with Win7 x64


12/05/2020

Wis Tech WISGO7007 video capture 64 bit drivers

 

The Wis Tech GO7007 video capture hardware compression chip was put into a lot of Video Capture devices from 2002 to about 2009.

It was popular during that time because it cost about $10 a chip in bulk and only performed a "Pre-Compression" to a stream format, preparing it for formal protocol compression into a recognizable standard by a PC attached through a bridge chip like FireWire IEEE-1394 or USB 1.0, 2.0 or 3.0

It was specifically designed as a microprocessor with DSP functions and without a specific format target, intended for use in PC devices rather than as a single chip solution for a DVD recorder, which many other companies targeted with much more expensive chips at the time.

Mostly the "GO stream" output format was a DCT (Discrete Cosine Transform) product, a step which many of the recognized compression formats already performed prior to the actual dissection and redundancy elimination steps. It can be looked upon as a common 'root' step of filtering high frequency 'noise' out of a frame of video, before overhauling it in greater detail into compressed frames and groups of pictures to further reduce video stream size.

This lightened the 'load' on the PC bus used to bring the stream into the PC for further processing, be that a Mac or traditional PC.

To be sure this could be ISA, PCI or PCI express busses as well as Firewire or USB external connections.

This led to the term or idea of the Personal Video Recorder (PVR), which gained traction at the time, as opposed to the more familiar Digital Video Recorder (DVR) or DVD recorder.

Unfortunately, with the shift from 32 bit to 64 bit architecture beginning with Windows Vista and continuing through Windows 7, 8, 8.1 and 10, many of these devices lost device driver support once 32 bit Windows fell out of favor.

The Plextor ConvertX and ConvertPVR, and the ADS Technologies and StarTech derived products, found themselves without a device driver, and many went for next to nothing on places like eBay to those who continued to use 32 bit machines.

A couple of vendors like ADS Technologies did produce candidate 64 bit Vista device drivers, which do run in Windows 7 x64. However they are hard to find online, and ADS Technologies never finished to the point where the candidate drivers were actually signed with a Microsoft Device Driver signing certificate, so they can only be used in Test mode.

StarTech, which sourced a device called USB2TVTUNER, "may be" the last vendor that had an actual robust 64 bit device driver with signing for Vista, Windows 7, 8 and 10 -- but it is very hard to find, and the 64 bit device drivers were only found after an exhaustive search in archive.org of a remote Mexican website for Doctors. The StarTech website download for the 64 bit drivers has long since been taken down and was not archived in whole to archive.org.

The WIS GO7007 was a chip that came with an SDK for Windows and later for Linux. Essentially, like many designs of the time, the front facing or Input section of the chip depended on a 'video signal preparation' chip, which could be as small as a single wire interface to a composite input, or as complex as an Intermediate Frequency multiplexer coming from a Broadcast Digital Adapter or 'TV Tuner'.

In general the preparation chip would select which Input source to switch onto the inputs of the WIS GO7007 chip, and/or route the signal around the chip if plain YUV video capture was the choice.

Then would come the WIS GO7007 chip and its processing of the stream, followed by an interface chip that supported some type of computer bus, FireWire or USB. The WIS GO7007 had native support for HPI or USB interface chips and a general purpose interface to I2C or SPI (I'm not sure which). In those days a general purpose serial bus like I2C or SPI was very popular on cards and external devices, since it required fewer traces to carry short distance serial communications between chips.

Then would come the Interface chip to the PC.

As far as I know the EyeTV 200 device was the only example of a FireWire TV Tuner that used the WIS GO7007, and it only worked on an Apple Mac with the EyeTV software.

Many more examples of WIS GO7007 use were available on the PC.

But Pinnacle had a fairly large collection with offerings for both the Mac and the PC, including many TV Tuner devices through the EyeTV software.

On the PC side the WIS GO7007 is less known, but well represented. As mentioned, ADS Technologies had the DX2, and possibly the Hauppauge PVR line had some WIS offerings. StarTech carried the USB2TVTUNER. And the Plextor line covered both the Mac and PC with multiple variants on the ConvertX and ConvertPVR standalone and TV tuner designs.

The general device driver usage of these devices was the same. First, go7007 firmware 'blobs' were uploaded to the WIS GO7007 chip, which was then reset to boot; other peripheral chips were programmed to their initial state before assuming the video capture position; and finally commands were sent over the Mac or PC attached bus to commence video capture.
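The startup order described above can be sketched as follows. Every function name here is a placeholder, not a real WIS SDK call; only the sequence being recorded is taken from the text:

```python
# Hypothetical sketch of the GO7007 capture startup order:
# firmware upload, chip reset, peripheral programming, then capture
# commands over the host bus. The stubs just log the sequence.
startup_log = []

def upload_firmware(blob: bytes):
    startup_log.append("firmware")      # firmware 'blob' sent to the chip

def reset_go7007():
    startup_log.append("reset")         # chip reset to boot the firmware

def program_peripherals():
    startup_log.append("peripherals")   # video decoder / tuner chips set up

def start_capture():
    startup_log.append("capture")       # host bus commands begin streaming

upload_firmware(b"\x00" * 16)           # placeholder blob
reset_go7007()
program_peripherals()
start_capture()
print(startup_log)
```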

Most Mac or PC attachments operated in a Master/Slave mode, where the video capture device either looped processing frames until data was retrieved, or waited silently for commands to send data.

The actual bus architecture could present possible problems in maintaining capture rate and simultaneous audio and video captures to prevent lip sync problems.

One of the major benefits of using an all in one audio and video capture chip was that it could decide and take actions to prevent lip sync problems on its own, simplifying latency-sensitive decision making in code on the Mac or PC and leading to a fairly stable capture experience.

Hardware encoder chips that were dedicated to one capture format or another, and that did not act as general purpose Digital Signal Processors, were costly and less adaptable as new standards came out. Throwing the 'baby out with the bathwater' each time a new MPEG format or profile was announced was a common occurrence.

Whether it was a Hardware, Software, or Para-hardware solution, a great deal of processing power had to be used to convert the analog signals to digital.. and this typically threw off a lot of heat.

Some busses could power the capture device from the bus connection, like USB or Firewire, others could not and required a separate external power supply further entangling the cable set.

The WIS GO7007 had a processing amplifier (or proc-amp) in its driver set which could change the signal processing of the chip on the fly, including brightness, contrast, saturation and sharpness. Quixotically.. this last parameter was set to (blur), or very low, in the SDK and often left that way. An unobservant device driver writer would overlook this 'bad' setting, resulting in a default video capture bordering on 'Terrible'.. but at the incredible price of $10 a chip.. it was assumed par for the course.

Changing this value manually within the capture driver after startup (which always resets to the 'terrible' setting on reboot) provides some [Spectacular] video capture results.
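A toy illustration of that proc-amp fix. The parameter names and the 0-100 scale are assumptions for illustration, not the actual WIS driver interface; the point is the shipped default leaving sharpness at the 'terrible' low end:

```python
# Illustrative model of the proc-amp defaults described above.
# Note the SDK-style default: sharpness left at 0 ('blur').
DEFAULT_PROCAMP = {"brightness": 50, "contrast": 50,
                   "saturation": 50, "sharpness": 0}

def fix_sharpness(procamp: dict, value: int = 50) -> dict:
    """Return a copy with sharpness raised to a sane, clamped value."""
    fixed = dict(procamp)
    fixed["sharpness"] = max(0, min(100, value))
    return fixed

settings = fix_sharpness(DEFAULT_PROCAMP)
print(settings["sharpness"])
```

In the real driver the equivalent change would have to be re-applied after every reboot, since the default is restored on startup.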

How to Turn Bluetooth Radio Back on, How to Bring the Bluetooth Systray Icon Back

 

How to Re-Enable Bluetooth Radio when it's disabled
and the Bluetooth Systray Icon disappears
or fails to appear on Reboot

1. Open > Control Panel
2. Double click on "Bluetooth Devices" < NOTE: "Devices not Settings"
3. Open "Tools" Menu, go to the last option called "Bluetooth Settings"
4. On the "General Tab" < the Default when opening "Bluetooth Settings"
5. [Checkbox] the Empty [_] "Turn Bluetooth Radio On"
6. Click "Apply"
7. Click "OK"

How to Restore the "Bluetooth System Tray" Applet Configuration Icon

1. Mouse to the "System Tray"
2. Right Click on a Systray Blank area to get "Properties"
3. Click on "Customize notification icons"
4. Uncheck box [_] "Always show all icons and notifications on the taskbar"
5. Find the "Csr Bluetooth TrayApplication / CSR Bluetooth"
6. Change the Behaviors to "Show icon and notifications" (not) "Only show notifications"
7. Recheck the  [_] "Always show all icons and notifications on the taskbar"
8. Click "OK"
9. Click "OK" to dismiss both windows
A. At least Logoff and Login to refresh the System Tray Applet Icons
B. Reboot for good effect and to initialize a proper startup order of Radio and System Tray startup

10/19/2020

4:1:1 more appropriate for 4:3, 4:2:0 more appropriate for 16:9

 In the last posting in February I came down on the side of 4:1:1 for regular normal (plain) VHS resolution capture.

I still think that is true, in particular for high velocity action sequences and where post capture Editing of the captured material may be a possibility.

But also, the geometry of traditional SD capture renders the display Aspect ratio as 4:3, which is much closer to a "square" shape than a 16:9 wide-screen Aspect ratio.

 So for two reasons, 1. the limited TVL horizontal resolution of the VHS format, and 2. the shape of the pixel due to Aspect ratio.. choosing a 4:1:1 capture format may have advantages over a 4:2:0.

I am vaguely aware of a difference between pixel "display" Aspect ratio versus pixel "capture" or sampling Aspect ratio.. and that there is a terminology assigned to those differences. It can become quite confusing.

 It comes up when trying to display a wide screen formatted video in a viewer or display device without a proper attribute in the file header to indicate the intended shape of the pixels to be displayed.. aka the "wide screen attribute".

This leaves me thinking (in simple terms) that reserving DV capture for VHS and MPEG2 capture for S-VHS is an appropriate choice.. especially when capturing Wide-Screen movies or video in the wide screen display format, since that literally focuses more of the horizontal color sampling on the wide screen's major horizontal axis.

Broadcast video is a whole other situation, but video capture from a VHS tape is vastly different from video capture from a Broadcast signal.. since there actually is more TVL resolution in the horizontal axis to be captured. In that case it becomes a question of whether 4:2:2 or 4:2:0 is the better choice, since 4:1:1 would clearly not be preferred.

In the days when Broadcast signals were analog NTSC this would be an interesting debate, but in the current era, where transmissions are "digital" and mostly MPEG-TS or MPEG-PS, it makes more sense to just capture the digital signal for storage rather than converting back to Analog and then capturing to Digital.

 

2/10/2020

4:1:1 vs 4:2:0 for VHS to Digital Transfer

I'm still wrestling with the 4:1:1 color sub sampling versus 4:2:0 color sub sampling.

4:1:1 is used with DV (Digital Video) equipment, sometimes called DV25

4:2:0 is used with MPEG2 or DVD-video

DV is what was used for the PC and Mac standard camcorder equipment, and subsequently the DV or AVI files way back in the 2000-2004 time frame.

DVD-video came about for transferring film.

Basically 4:1:1 refers to an imaginary 4 x 2 sampling grid of 4 pixels across two rows, for a total of eight pixels.

The second digit "1" refers to the fact that only one color sample is taken for every four pixels, while all four pixels are sampled for their brightness values. When reproduced for viewing, that one color sample is spread out over the four pixels of that row. This saves on the amount of information that must be stored and makes the files smaller.

The third digit "1" indicates that the second row of the grid gets its own color sample as well.. that is, "Every" row gets one color sample per four pixels. So there is more color information sampled in the vertical direction than in the horizontal direction. This does not save on the amount of information that must be stored, but there are fewer rows (or lines) than there are virtual horizontal pixels.
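The sample-count bookkeeping above can be sketched numerically. The (a, b) table follows the usual J:a:b convention of 'a' color samples in the first row of a 4-wide block and 'b' additional color samples in its second row:

```python
# Map each J:a:b scheme to (a, b): first-row chroma samples and
# additional second-row chroma samples, per 4-pixel-wide, 2-row block.
SCHEMES = {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:2:0": (2, 0), "4:1:1": (1, 1)}

def chroma_resolution(scheme: str, width: int, height: int):
    """Effective chroma grid for a frame of the given luma resolution."""
    a, b = SCHEMES[scheme]
    h_res = width * a // 4                # horizontal chroma samples per line
    v_res = height if b else height // 2  # b = 0 halves vertical chroma
    return h_res, v_res

for s in ("4:2:2", "4:1:1", "4:2:0"):
    print(s, chroma_resolution(s, 720, 480))
```

For a 720 x 480 frame this gives 360 x 480 chroma for 4:2:2, 180 x 480 for 4:1:1, and 360 x 240 for 4:2:0, matching the horizontal-versus-vertical trade described in this entry.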

This is known as a type of virtual compression.. an economy achieved by "sampling" a coarser grid of color pixels which is then stretched over the higher resolution grid of bright and dark image pixels.

VHS and S-VHS (not to be confused with s-video) are two methods of recording Broadcast video to tape.

VHS has a virtual horizontal resolution of pixels along a horizontal line of about 250 pixels

S-VHS has a virtual horizontal resolution of pixels along a horizontal line of about 500 pixels

VHS has about 1/2 the horizontal TVL (TV line) resolution of S-VHS

Each of these is "less than" the sampling rate of 720 pixels.
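As a rough check of those figures: TVL counts resolvable lines per picture height, so by the usual convention the equivalent pixel count across the full 4:3 frame width is about TVL x 4/3:

```python
# Rough TVL-to-pixel conversion. TVL is measured per picture height,
# so scale by the frame aspect ratio (4:3 for SD) to span the full width.
def tvl_to_pixels(tvl: int, aspect=(4, 3)) -> int:
    w, h = aspect
    return tvl * w // h

print(tvl_to_pixels(250))  # VHS
print(tvl_to_pixels(500))  # S-VHS
```

That puts VHS around 333 equivalent horizontal pixels and S-VHS around 666, both under the 720-pixel sampling grid, consistent with the point above.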

When you digitize an S-VHS tape the number of sampling pixels along that horizontal line is more important for S-VHS than it is for VHS.

So

4:2:0

Indicates that for the same four pixels horizontally, the "2" means two color samples will be taken.

But for the two rows vertically, "0" indicates there will not be another or different sample taken for the second row, reducing the color sampling in the vertical direction.. effectively cutting in half the vertical color resolution of the image.

4:2:0 may be more important when digitizing an S-VHS tape, than 4:1:1

But 4:1:1 may be as good or "better" overall for VHS tapes, since 4:2:0 would be over sampling the same low TVL virtual horizontal resolution.. and VHS simply does not have anything more to give in the horizontal direction. Over sampling can better handle noise rejection when filters are applied.. but it would not enhance the picture by itself. Color resolution in the horizontal direction would not be lost, and it would in fact be enhanced in the vertical direction, since more absolute color samples would be taken.. even when sampling at 720 x 480.

Also for VHS, capturing in 4:1:1 does not introduce long GOP frame dependencies making editing far easier, and less CPU intensive.

4:1:1 is optimal for capturing fast moving and unpredictable motion, whereas 4:2:0 does not handle it well and requires multiple passes to reduce artifact errors when capturing high velocity motion changes.. reallocating bit rate at the cost of other parts of the picture.

DV's 4:1:1 uses a higher, constant bit rate for a reason.
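A back-of-envelope comparison of raw (uncompressed, 8-bit) frame sizes shows that 4:1:1 and 4:2:0 carry the same total amount of color data, just distributed differently, while 4:2:2 carries twice as much:

```python
# Raw bytes per frame at 8 bits per sample: one full-resolution luma
# plane plus two subsampled chroma planes (Cb and Cr).
def raw_frame_bytes(width: int, height: int, scheme: str) -> int:
    a, b = {"4:2:2": (2, 2), "4:2:0": (2, 0), "4:1:1": (1, 1)}[scheme]
    luma = width * height
    chroma_per_plane = (width * a // 4) * (height if b else height // 2)
    return luma + 2 * chroma_per_plane

for s in ("4:2:2", "4:1:1", "4:2:0"):
    print(s, raw_frame_bytes(720, 480, s))
```

At 720 x 480 this gives 691,200 bytes per frame for 4:2:2 and 518,400 for both 4:1:1 and 4:2:0. At ~30 frames/sec the raw 4:1:1 stream is roughly 124 Mbit/s, which DV's intraframe compression reduces to its constant ~25 Mbit/s (hence "DV25").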

When transferring a Broadcast signal with a very high TVL (tv vertical line) resolution, or virtual horizontal resolution.. a case can be made for capturing only in 4:2:2 color sub sample space and then reducing it to 4:2:0 through a multi-pass process "after" editing to produce a smaller distribution format file.

When transferring an S-VHS signal with a very high TVL, 4:2:0 may offer benefits over 4:1:1

But when transferring a VHS signal with moderate "lesser" TVL, 4:1:1 should offer similar, if not superior picture quality due both to the equivalent or more appropriate color sub sampling in the horizontal direction, as well as the definitely superior (double) the color sub sampling in the vertical direction.. and the full frame non-GOP capture format.

In addition, after editing the 4:1:1 format can still be reduced through a multi-pass process to 4:2:0.

Therefore:

4:2:2 is without a doubt superior for Broadcast and S-VHS transfer

4:1:1 is arguably better for VHS transfer

And

4:2:0 remains a viable distribution format

4:2:2 could also be used  for VHS transfer, but as an "Oversampling" tool for further filter and noise rejection methods to extract the best picture possible.. since it requires specialized equipment and faster capture hardware.. its benefits over 4:1:1 may be marginal at best

If no noise filtering or further preparation is done before editing to take advantage of the "Oversampling" of a VHS tape.. 4:2:2 would be adding unnecessary strain on resources for little gain.

Recording direct to 4:2:0 has advantages when dedicated 4:2:0 compression hardware is present and the intended target includes eventually producing a DVD-video anyway.. with minimal or coarse editing, possibly using Chapter marks as defined by the DVD-VR specification for VOB units and menus. In fact the editing could be done away with entirely, as VOB units can be referred to (without) actually editing and re-encoding the video stream.. sacrificing storage space, simply by including a series of virtual "skip" markers in the video menus, stringing them together as DVD-video Program chains.

Which re-visits the issue of appropriateness when capturing Broadcast, S-VHS and VHS using 4:2:2 or 4:1:1: either directly recording to 4:2:0 for simplicity and expediency's sake, or Professionally capturing as 4:2:2 where the source material still retains a greater 400-500 TVL horizontal resolution, or Consumer capturing as 4:1:1 where the source material is a lesser 240-250 TVL horizontal resolution and some level of editing and post production is planned on a shoestring budget.

4:2:2 can be thought of as Z:X:Y

Where Z is the size of the grid, defined as 4 pixels wide across two rows

Where X is the number of color samples taken along the first horizontal row

Where Y is the number of different color samples taken along the second row.. i.e. whether the second row gets its own samples (Y = X) or reuses the first row's (Y = 0)

4:1:1 has 1 sample in the first row, and 1 different sample in the second row (full vertical color resolution)

4:2:2 has 2 samples in the first row, and 2 different samples in the second row

4:2:0 has 2 samples in the first row, but 0 different samples in the second row (the second row reuses the first row's colors)

Another way of describing it:

4:1:1 has 1+1 = 2 different possible color samples
4:2:2 has 2+2 = 4 different possible color samples
4:2:0 has 2+0 = 2 different possible color samples

The difference is the direction of "emphasis" when color sampling.

4:2:2 "emphasizes" both directions
4:1:1 "emphasizes" the vertical direction, and captures more fine detail in the vertical direction
4:2:0 "emphasizes" the horizontal direction, capturing more fine detail in the horizontal direction

Normally greater horizontal resolution is better, especially in the monochrome bright and dark field of vision, where color is already reduced along that axis.. meaning the color reduction of VHS is not perceived as much as the greater monochrome bright and dark resolution of a higher resolution S-VHS picture

It's appropriate to scale the color resolution with line resolution, but just as valid to de-scale it when the TV line resolution (TVL) is reduced as well. It's practically a zero sum game.

Thinking along these lines:

Use of

4:2:2 for S-VHS transfer
4:1:1 for VHS transfer

would seem appropriate

Going further

720 x 480 x 4:2:2
360 x 480 x 4:1:1

Might make some sense, reconstructed with the appropriate aspect ratio for viewing

By convention however

720 x 480 x 4:2:2
720 x 480 x 4:1:1
720 x 480 x 4:2:0

are more common

Or

720 x 480 x 4:2:2 for S-VHS
720 x 480 x 4:2:0 for S-VHS distribution

720 x 480 x 4:1:1 for VHS
720 x 480 x 4:1:0 for VHS distribution

Since VCD, or anything smaller than DVD-video, is archaic and unconventional these days, 4:1:0 is probably overkill where synthetic color sub sampling is concerned. And most programs do not contain a profile for it.

Stepping down to the lesser color sub sampling of 4:2:0 after editing 4:1:1 was (and is) a well known and practiced behavior in the edit room.

4:2:0 is also known for "co-siting" color samples, referring to the centering of the virtual color sample "spot" at the center of a square representing the area surrounding the sample spot that will inherit the same color information upon reproduction. For aspect ratios where the pixels are square this is optimal, but most aspect ratios are not reproduced with "square pixels", and this is sub-optimal, leading to "jaggies" or anti-aliasing problems in the color field conflicting with the monochrome information.. or "color bleed", depending on how bad the difference between the co-siting and the pixel aspect ratio is. 4:2:2 handles this problem "well"; 4:2:0 does not without additional post capture anti aliasing methods.. and some color artifacting is unavoidable.

4:1:1 actually aligns the color information "better" because the smallest ratio is 4:3 and goes up from there preferentially reproducing horizontal rectangles.

4:2:0 was more optimal when re-sizing or re-scaling a wide film format for the 4:3 aspect ratio of consumer television.. that is, it was not as noticeable. It is more noticeable when capturing and reproducing S-VHS or VHS from 4:2:2 to 4:2:0; even though the color sampling is good.. artifacts are more likely in the color space.

4:2:2 consumer camcorders and DSLRs are just beginning to capture in this format natively

4:4:4 studio cameras are regularly capturing in this format, as are some higher end prosumer equipment