10/29/2023

Getting Intel NUC DCCP847DYE Bluetooth working with headphones on Windows 7 x64

This took a long time to figure out. Here are the basics:

The NUC arrives with an [ Intel(R) Centrino(R) Advanced-N 6235 ]; it is a dual-band wireless (Wi-Fi) card and Bluetooth adapter.

When you install the Wi-Fi drivers, the package does not install a manufacturer's driver for the Bluetooth adapter; instead Windows installs a generic Microsoft driver which does not include Bluetooth Profile Providers for the various functions of the device. So you could pair a headset with the adapter and it would show up in the Devices and Printers section, but it would have no profile providers and would not appear as an audio device.

The fix is to find a complete manufacturer's Bluetooth driver with Profile Providers and then pair the device.
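To check whether the radio is still on the generic Microsoft stack or has picked up a manufacturer driver, one option is to query the PnP signed-driver list and look at the provider. This is only a sketch, assuming Python 3 is installed; it leans on the wmic tool, which ships with Windows 7.

```python
# check_bt_driver.py - list Bluetooth-related drivers and who provides them.
# Sketch only: assumes Python 3 is installed; wmic ships with Windows 7.
import subprocess

out = subprocess.run(
    ["wmic", "path", "Win32_PnPSignedDriver",
     "get", "DeviceName,DriverProviderName,DriverVersion"],
    capture_output=True, text=True).stdout

for line in out.splitlines():
    if "bluetooth" in line.lower():
        # A provider of "Microsoft" here usually means the generic stack with no
        # profile providers; "Intel" means the full manufacturer driver is in place.
        print(line.strip())
```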


 

TL;DR - get the Lenovo package:

Intel(R) Centrino(R) Wireless Bluetooth(R) 4.0 + High Speed Adapter Software for Windows 7 (32-bit, 64-bit), XP - ThinkPad T431s, X230s

It's large, 298 MB, but it's worth it.

It comes from 2013, but it installs cleanly, separate from and side by side with the existing, working Wi-Fi driver, and it installs a complete set of Bluetooth Profile Providers.

A few tips:

Any pre-existing paired devices will not be "fixed" simply by installing this after they are already paired and showing up in Devices and Printers; you have to remove them, reboot, and pair them again.

During the install there is an EXE phase and an MSI phase, so it looks like it is installing multiple packages using multiple methods.

After the package install and pairing, Device Manager will show an anemic-looking "Bluetooth Audio" device, which is a little odd for a Bluetooth device, since Bluetooth Profile Providers don't usually show up in Device Manager.

All oddities aside, this Lenovo driver package seems to follow the Bluetooth design guidelines to a tee and integrates with Windows 7 x64 very well.

Final Tip!

When downloading from Lenovo, a box will pop up asking for a serial number, with lots of text; ignore all of it. Go to the very bottom and click [Cancel], and a banner will appear over the same page you launched the download from. It warns that this software may not work with your hardware; click the download link again and it will begin downloading.









10/20/2023

Qnap vs Synology - NAS Storage systems

From my perspective: both are based on Linux, and both companies are based in Taiwan. The QNAP OS (QTS, running on their Turbo Station hardware) is intuitive for Apple Mac users; the Synology OS (DiskStation Manager, or DSM) is intuitive for Windows users.

QNAP is more of a build-it-yourself solution that lets you select from many hardware options and sticks to EXT4; everything is an add-on. It supports the more traditional RAID levels and can appear bare-bones. RAID drives need to be identical in size, since capacity beyond the smallest member is wasted.

Synology is more of a complete, preconfigured solution out of the box, with fewer hardware options but with access to Btrfs out of the box. RAID drives can be different sizes and leverage SHR (Synology Hybrid RAID) 1 and 2, so in that way it is closer to having Drobo-like features.
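As a rough illustration of why mixed drive sizes matter, here is a quick capacity comparison for a hypothetical set of mismatched drives. It uses the common rule of thumb that single-redundancy SHR yields roughly the total capacity minus the largest drive, while a classic RAID 5 is limited by the smallest member; this is a sketch, not an exact Synology calculator.

```python
# Rough usable-capacity comparison: classic RAID 5 vs. SHR with mismatched drives.
# Drive sizes are hypothetical; the SHR figure uses the usual single-redundancy
# rule of thumb (total minus largest drive), not an exact calculator.

drives_tb = [2, 4, 8]  # hypothetical mismatched drives, in TB

raid5_usable = (len(drives_tb) - 1) * min(drives_tb)   # limited by the smallest drive
shr_usable   = sum(drives_tb) - max(drives_tb)         # SHR-1 rule of thumb

print(f"drives: {drives_tb} TB")
print(f"classic RAID 5 usable: ~{raid5_usable} TB")
print(f"SHR (1-drive redundancy) usable: ~{shr_usable} TB")
```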




9/25/2023

Diamond GameCaster 1500 - working with OBS

 

The Diamond GC1500 is an h.264 video capture device built around a Fujitsu H5x hardware encoder chip. It accepts HDMI and YPbPr (component) video, with audio from the embedded HDMI stream or from the unbalanced line-in RCA (red and white) input jacks.

It officially comes with an optical disc containing software drivers and capture software for recording or streaming.

However, the Diamond website also released a customized version of OBS Studio that supports the device.

When installed, the custom driver appears as "HD Video Capture Device", and during installation it throws up a warning asking whether the signing entity of the device driver, "KWorld Computer Co. Ltd", is trusted.

It is a 64-bit device driver and does install on Windows 7 x64 using the Troubleshoot Compatibility mode. It is detected as Windows XP SP2 compatible.

The OBSKit.zip has a problem: it appears to be a distribution pulled from a GitHub repository, and it has lingering traces of very long dot-prefix filenames that interfere with normal installation. This is compounded by .DS_Store and other dot-prefix files normally associated with Apple Mac systems. If these are in place, the installers will not work correctly and will choke, and when attempting to start OBS64.exe or OBS32.exe it will report that .coreaudio or many other dot-prefix files are not compatible with this operating system. The loaders scan the directories and assume everything is a Windows file, when these are actually version tags for other files and metadata from the Apple Mac HFS file system.

These extra files are hidden by default by the folder view settings in Windows Explorer.

Open a Windows Explorer window and briefly tap the [ALT] key to get the old extended menu, layered above the normal [Organize Open Share Burn New Folder] options bar:

[File Edit View Tools Help]

[Tools - Folder options...] then the [View] tab

Under [Hidden files and folders], check the radio button for "Show hidden files, folders, and drives" and the dot files will appear. You can select and delete them (or sweep them out with a small script like the sketch below), and then everything will pretty much work as expected.
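As an alternative to deleting them by hand, a small script can sweep the extracted folder for the usual Mac metadata files. This is just a sketch; the OBSKit folder name is an assumption, so point TARGET at wherever the zip was actually extracted.

```python
# cleanup_mac_metadata.py - remove Apple metadata files that confuse the OBSKit loaders.
# Assumption: the zip was extracted to .\OBSKit; adjust TARGET to your actual folder.
import os

TARGET = r".\OBSKit"  # hypothetical extraction folder

removed = 0
for root, dirs, files in os.walk(TARGET):
    # Don't descend into __MACOSX if present (AppleDouble resource forks live there);
    # it can be deleted by hand afterwards.
    if "__MACOSX" in dirs:
        dirs.remove("__MACOSX")
    for name in files:
        # .DS_Store and ._* "AppleDouble" files are HFS metadata, not Windows binaries.
        if name == ".DS_Store" or name.startswith("._"):
            path = os.path.join(root, name)
            os.remove(path)
            removed += 1
            print("deleted", path)

print(f"removed {removed} metadata files")
```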

Do not install a vanilla OBS Studio build; it will be unable to see the device or make use of its output. The OBSKit.zip is very specific to the Diamond GC1500 video capture device.

When opening OBS64.exe, it may already have a video capture device preconfigured.

Most of the defaults are okay, but the audio settings can make or break a capture session.

Setting it to "Use custom audio device" will stop the video.

As far as I can see, it is not possible to play audio in the capture Preview while simultaneously capturing.

This may be a deliberate concession, however, since the device was meant to work over USB 2.0, and simultaneous playback and recording over USB 2.0 can produce too many interrupts to reliably keep capture and playback in sync without overloading the system and eventually losing audio/video sync.

Instead, use the HDMI pass-through feature to "monitor" the video on a separate playback device, like an HDMI monitor or TV, and you will be able to experience audio and video in sync.

The capture device, however, needs to remain in [Audio Output Mode] - [Capture audio only].

Recordings are very smooth over USB 3.0, and presumably over a USB 2.0 port, and will produce FLV or MP4 files with h.264 video and AAC-LC audio.
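If you want to confirm what actually landed in a recording, and you happen to have FFmpeg's ffprobe installed (an assumption; it is not part of the OBSKit package), a quick stream check looks something like this, with capture.mp4 standing in for whatever the recording is named:

```python
# probe_recording.py - list the codec of each stream in a capture file using ffprobe.
# Assumptions: ffprobe (from FFmpeg) is on PATH, and the recording is named capture.mp4.
import subprocess

result = subprocess.run(
    ["ffprobe", "-v", "error",
     "-show_entries", "stream=codec_type,codec_name",
     "-of", "default=noprint_wrappers=1",
     "capture.mp4"],
    capture_output=True, text=True, check=True)

# Expect lines along the lines of codec_name=h264 / codec_type=video
# and codec_name=aac / codec_type=audio for a good capture.
print(result.stdout)
```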

The documentation on using the "custom" OBS with the GC1500 is very sparse, and the reputation of the game capture device is not great; I think that is because of the language barrier and poor documentation in bringing it to market. But the device is very good. Several capture devices were built on this family of chips.

This device seems rather rare compared to the earlier EMPIA-based GC500 models or the similar H5x-based GC1000 models, and it is very different from the standalone flash-recording and streaming GC2000 models.

9/20/2023

DV and oversampling: is 4:1:1 really losing color information?

720x480 is the standard sampling raster for Standard Definition NTSC video.

That's 720 samples across each horizontal line.

That's 480 lines stacked vertically, i.e. 480 samples down each column.


When analog video signals are measured, resolution is defined by the number of vertical lines, placed side by side, that can still be distinguished from one another.

That is, if you took a bunch of vertical "bamboo sticks" or "straws", stood them up on end, lined them up shoulder to shoulder, and then stood way back from them: how many could you stand up side by side before they appeared to "blur" together and you could no longer distinguish them from one another?

If you have only five and spread them out evenly, with even spaces between them, you would have a better chance of seeing that there are "five" from far away.

But as you stack more and more side by side, decreasing that even space between them, they crowd together until visually they seem to "blur".

These are called "vertical lines of resolution" (TV lines, or TVL): how densely can you stack a forest of trees (or straws) before the count becomes meaningless?

A VHS tape can typically produce a signal with around 320 TV lines of resolution, while a broadcast signal can produce something closer to 500.

Digitally sampling 720 points along a horizontal rod laid across all of those straws means you are oversampling by roughly a factor of 2 samples per straw, which can cause aliasing or interference patterns, but generally smoothing and anti-aliasing can compensate for this interference.

So basically, in a perfect scenario, 320 samples per horizontal line would be enough for a VHS tape.

DV (for NTSC) samples on a 4:1:1 grid: 4 luma samples along a horizontal line for every 1 chroma sample on that same line, with chroma sampled on every line vertically.

DV is standard-definition, interlaced video only, so it samples the field rather than the frame: a sample grid of 720x240 per field laid over a source of roughly 320 TVL by 240 lines.

If you think of the two fields woven into a progressive frame before digital sampling, this becomes

a 720x480 sample grid over roughly 320x480 of source resolution.

Scaling that 4:1:1 grid against the source and discounting the redundancy, the luma ends up oversampled by roughly 2:1.

That means for a 720-dot line there are about two luma samples per resolvable source detail, while the 180 chroma samples per line (720 / 4) roughly match the chroma detail the source can deliver, so the chroma capture is close to 1:1 and effectively lossless.

The actual loss depends on the source signal actually being near-perfect at 320 lines, a figure which includes the overscan region normally hidden by CRT bezels and thus normally not relied upon in televised or recorded SD content.

And if the tape was recorded not at SP speed but at LP or EP speed, it can be even worse.

It should also be realized that the NTSC signal saved on a VHS tape is recorded as a color-under signal and reproduced from that compression scheme, meaning it has already lost half of the horizontal color resolution, effectively 2:1:1. Claims of 320 TVL of chroma may be made, but it is more likely that figure is for luma only, and the chroma is closer to 160 in a best-case scenario.

It may "seem" like a reduction in color sampling going from analog to digital, but in reality 4:1:1 is oversampling only the luma and sampling the chroma at roughly a 1:1 ratio relative to the signal actually available from a VHS source.

Sampling a digital conversion from a higher-resolution source, such as a Betamax composite or broadcast signal, may reach up to 500 TVL on paper, in a studio, but in real-world scenarios losses over a transmission line, or due to broadcast and reception on less-than-perfect equipment, will bring that down substantially. S-VHS, S-Video and EP speeds may claim to capture more of the signal on tape, but cumulative losses are likely to claim some of that resolution, some of which must also be sacrificed to timebase correction and frame-sync corrective actions on older tapes.

With color-under NTSC recording (a quick arithmetic sketch follows these points):

If the "4" is the luma, the analog signal carries about 320 lines and we are oversampling by roughly 2x with 720 samples, so we have the potential for aliasing, but also the potential to use the extra information for slight sharpening.

If the "1" is the chroma, the analog signal actually carries only about 160 chroma lines, and the 180 chroma samples per line cover that, so "nothing is being lost"; the capture resolution is effectively 1 to 1.

And since the chroma of the analog signal is sampled per "field", not per frame, because DV only works on an interlaced signal, "nothing is being lost" vertically either.
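Putting rough numbers on the argument above, using the approximate figures from this entry (they are estimates, not measurements):

```python
# Back-of-the-envelope check of the 4:1:1 vs. VHS argument above.
# Numbers are the approximate figures used in this entry, not measurements.

samples_per_line = 720          # ITU-R BT.601 luma samples per line
luma_tvl_vhs     = 320          # approx. resolvable luma detail from VHS (per this entry)
chroma_tvl_vhs   = 160          # approx. resolvable chroma detail after color-under

luma_oversample   = samples_per_line / luma_tvl_vhs    # ~2.25x oversampled
chroma_samples    = samples_per_line / 4               # 4:1:1 -> 180 chroma samples/line
chroma_oversample = chroma_samples / chroma_tvl_vhs    # ~1.1x, i.e. roughly 1:1

print(f"luma:   {samples_per_line} samples for ~{luma_tvl_vhs} resolvable lines "
      f"-> {luma_oversample:.2f}x oversampled")
print(f"chroma: {chroma_samples:.0f} samples for ~{chroma_tvl_vhs} resolvable lines "
      f"-> {chroma_oversample:.2f}x, effectively 1:1")
```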

4:2:0 makes better sense when converting from a true 4:4:4 source; it splits the chroma reduction between the horizontal and vertical dimensions, and moreover it pairs with psychovisual, intra-frame compression opportunities, but it suffers from macroblocking artifacts.

Mosquito noise is an artifact of condensing or composing a progressive frame from two interlaced fields separated in the temporal dimension, and it gets worse with high motion, or with low motion and sharp edges, which is where it is most often observed.

MPEG-2, or h.262, introduced macroblocks and inter-frame as well as progressive compression opportunities, with different complexity profiles.

MPEG-4 Part 10 AVC, or h.264, introduced more complex motion-search patterns (golden-ratio spirals and others) and psychovisual inter-frame as well as progressive compression opportunities, again with different complexity profiles.

DV was appropriate for its time: its compression advantage was very low, but it retained the best picture available from the signals of its day. It would never be appropriate for the HDTV or HD signals of today.

But it gets maligned quite often as an inferior capture format, when in reality, considering the signal source, it's more than adequate. It was also widely adopted by many of the operating systems of the day and remains a simple format to decode and present.

MPEG-2 was more appropriate for studio film conversions to digital and for storage on DVD playback media. It was a little over-hyped for its time but over-delivered for systems not yet ready to deal with the format; it was designed in anticipation of the better-quality signals available from component video or high-speed cable and fiber networks, leaving the SD and S-Video / S-VHS era behind.

Microsoft came up with its own codec, VC-1, to circumvent licensing issues, and partially due to those licensing issues Apple chose to sponsor and adopt the h.264 MOV/MP4 standards. This bifurcation led to a lot of market confusion, and the video format wars (VHS/Betamax/LaserDisc, or Blu-ray/HD DVD) did not help.

The format wars ensured that the OS vendors would eventually opt out and natively include neither, while marginally continuing to support DV and the more open h.264/MP4 standards, which were less heavily license-encumbered.

Microsoft stayed with VC-1 and optionally bundled licensed MPEG-2 codecs and decoders for a fee. Microsoft formally abandoned the nonlinear editor business after offering Windows Media Encoder 9 and Expression Encoder 3.

Apple went with h.264 MOV/MP4 for free, but also offered professional add-on codecs with nonlinear editor suites like Final Cut Pro X.

In later versions of Windows, Microsoft began including h.264/MP4 as a natively supported codec with little fanfare, as it offered no competitive advantage, and withdrew its limited MPEG-2 codec support by withdrawing Windows Media Center from the market, ceding most of the nonlinear editor business to Adobe Premiere and other companies such as Grass Valley (EDIUS), Blackmagic (Media Express), AJA, and (formerly Sony) Vegas, among others.




 


8/31/2023

Cloner Alliance Pro - ProcAmp - Brightness, Contrast, Saturation

The Cloner Alliance Pro box has a built-in processing amplifier (ProcAmp) for adjusting the brightness and contrast, as well as the saturation, of the video it records. But it is difficult to access from the on-screen display.

The Cloner Alliance Pro box has inputs for HDMI and component video plus stereo audio capture. It has three main audio/video ports: the HDMI in, the HDMI out, and the AV port (which is used with a special breakout cable to accept component video and audio).

The HDMI out is plugged into a TV or monitor and carries sound as well as video.

When initially configuring the Cloner Alliance Pro in standalone mode, you use the three buttons on the front of the box and an IR port with a handheld remote control.

The three buttons choose the "source" (one of the HDMI/component ports), trigger a "screenshot" capture, and start or stop a "record" session to a connected USB storage device.

Detailed configuration requires using the handheld remote and the "on-screen display" on the HDMI out.

The handheld remote has a circular dial in the middle, surrounded by four buttons: "i" for information, "camera" for snapshot, "back arrow" for return, and "house" for the home menu.

Detailed configuration has to be performed by navigating the on-screen display using the remote.

The most important button is the "house" or home menu button; this brings up an on-screen overlay of a transparent vertical up-and-down menu.

At first the overlay is centered on the "First" of four lateral (left <-> right) menus.

This first menu has the basic settings for selecting capture resolution and file format, and the bare minimum required to let the on-box "record" button start and stop recordings.

Once you start traveling up or down a menu using the up and down arrows on the circular wheel, you are "locked" into that menu and cannot move laterally to any other horizontal menu choice.

The on-screen tips explain how to "break out" of this locked-in menu choice: hit the "back arrow" or return button on the handheld remote and you will exit that menu's locked-in mode and be able to press the left and right arrows on the circular wheel to travel to a new major configuration menu.

The processing amplifier for adjusting the brightness, contrast and saturation of the image displayed, and to be captured as video, is in the Second major menu, one step to the right.

Note: the "Contrast" setting seems inverted relative to normal behavior; increasing it actually mutes the image and makes things less "contrasty". So the best approach is probably to reduce Contrast from 100 toward 0, where 40 is a fair setting. On power-up the firmware presets these to defaults that are often less than optimal and make the video appear dark and hard to see. Increasing Brightness to 100 and then setting Contrast to something less than 50 will brighten the picture and then pull out detail, darkening it where needed, so the picture appears to have more contrast without appearing too dark.

Remember!

If you start moving up and down in the "First" menu, you will not be able to move to the "Second" menu just to the right until you hit the "back arrow" on the handheld remote to exit the "First" menu's mode.

Simply pressing the right arrow on the handheld remote will do nothing and keep you trapped in the First menu; the button press will be ignored.

Only by pressing first the "back arrow" button and then the right arrow button on the perimeter of the circular dial will you leap out of the "First" menu and traverse to the "Second" menu.

The "Second" menu contains the Brightness, Contrast, Hue, and Saturation controls for the currently displayed video image visible partially behind the menu overlay.

Now you can move up and down to select one of the configuration options and increase or decrease its value; the results are immediately reflected in the video playing on screen behind the semi-transparent menu.

These options are saved when exiting all menus by pressing the "house" or home menu button.

But they do not remain set across restarts or power-down/power-up events of the Cloner Alliance Pro box. The time is also not saved across power cycles. They all have to be reset manually each time the device is powered on.


7/23/2023

Video Tape Capturing and Saving to DV, VC-1, h.264 natively

I was playing with ChatGPT and discovered maybe some little known facts about Windows 7.

Capturing to native DV format produces a high-quality file, using only light intra-frame compression, with slightly reduced color sampling (4:1:1).

However, since the NTSC signal is recorded to tape with already-degraded color by way of color-under recording, which assigns less bandwidth to the color components (UV) than to the luma (Y), converting that lesser color signal to 4:1:1 digital sampling is essentially the same thing. Nothing is lost; there simply isn't any more signal bandwidth available to carry more information than that collected by 4:1:1 digital sampling.

DV video passes a DCT (Discrete Cosine Transform) and quantization over each frame, compressing within the frame only, but it does not reduce the actual number of digital samples in the Y or UV axes. This results in a large file, but it should retain as much of the original signal as possible for playback.
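For a rough sense of the numbers, here is a back-of-the-envelope sketch comparing raw 8-bit 4:1:1 SD video with DV's fixed video bitrate (audio, subcode and error-correction overhead are ignored):

```python
# Back-of-the-envelope comparison of raw 4:1:1 SD video vs. DV's fixed video bitrate.
# Assumes 8-bit samples and ignores audio, subcode and error-correction overhead.

width, height, fps = 720, 480, 30000 / 1001   # NTSC SD raster and frame rate
bits_per_pixel     = 8 * (1 + 0.25 + 0.25)    # 4:1:1 -> one Y plus a quarter each of Cb, Cr

raw_mbps = width * height * bits_per_pixel * fps / 1e6
dv_mbps  = 25.0                                # DV video payload is a fixed ~25 Mbit/s

print(f"raw 4:1:1 SD: ~{raw_mbps:.0f} Mbit/s")
print(f"DV video:      {dv_mbps:.0f} Mbit/s  (~{raw_mbps / dv_mbps:.1f}:1 intra-frame compression)")
```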

DV does not in any way attempt to upsample or 'invent' more information than is actually present in a signal, nor does it attempt to convert from fields to frames (otherwise known as interlaced-to-progressive conversion).

Other compression schemes like MPEG-2 (h.262) or h.264 treat color samples somewhat differently, since they were shaped by film-to-digital conversion, where the available signals were captured using full-frame cameras in a progressive-to-progressive workflow. Those full-frame digital cameras had 4:4:4 available to them, and MPEG-2 or h.264 could take advantage of the higher available color sampling and downsample the color space to 4:2:0, retaining slightly more of the available color signal than a broadcast or VHS-tape color-under sample, which never had anywhere near the same original bandwidth due to the NTSC transmission standard.

When converting VHS tapes to digital files became popular, using the latest technology was considered more attractive, and owning one capture device, as opposed to many, limited the popularity of a basic DV converter.

Windows went from mostly using DV capture tools like Windows Movie Maker, to Adobe Premiere Elements and Pro, Sony Vegas, Canopus DV tools and Grass Valley EDIUS, across the 98, 2000 and XP editions.

Windows 7 followed up with tools such as Windows Live Essentials (installed separately), which could capture in DV format and then quickly convert the DV file into Windows WMV (VC-1, basically Microsoft's answer to h.262/MPEG-2) or Windows MP4 (h.264 AVC) formats.

The key was not in the Save Project menus, but in the [Save movie] menu and choosing the delivery type and format. Saving for DVD burn would convert to VC-1 and automatically hand off to Windows DVD Maker to guide you through setting up a menu structure for a disc and then burning it to a DVD. Choosing a delivery type of [For Computer] granted the option of giving the file a specific name and file-system destination, and then choosing a file type: [MPEG-4/H.264 Video File] or [Windows Media Video File (WMV/VC-1)].

These tools, while they could be used with a DV bridge, were targeted more at DV technology for offloading or uploading DV camcorder videos and then converting them to DVD or consumer file types.

They soon moved on to working with MPEG-2 hardware capture devices, since those were more popular for capturing broadcast or cable video, where commercials were routinely edited out and the results were stored on low-capacity storage devices like hard drives or limited DVD media until consumed.

The MPEG-2 hardware capture devices used lower absolute bandwidth than DV over FireWire and could finally be handled over PCI 2.x and USB bus types, eliminating the need for royalty-heavy hardware that required FireWire; in theory they could also capture a slightly better color signal if the transmission medium were entirely digital, as with ATSC instead of NTSC.

Some of the details got lost in translation, and a semi-cult status grew up around video capture in 4:4:4 uncompressed formats over the intervening years, as non-hardware-encoding field and frame grabbers became cheap and widely available. They did not add anything to the quality of the signal capture, and in fact complicated many problematic signal captures, since the earlier hardware encoders had special timebase and frame-synchronizing abilities that simple frame grabbers did not.

4:4:4 was and remains the capture domain of film and studio on site rather than something easily mastered at home.

h.264 and later compression techniques were developed initially to enable mobile phone and internet video delivery, but later profiles matched and exceeded the delivery quality of MPEG-2 and have all but replaced it.

The age-old question of DV versus MPEG-2 or h.264 capture options sidesteps, or 'hides', the question of archiving in an 'interlaced or field-based format' versus archiving in a 'progressive or frame-based format'. DV from the outset was 'field-based' and is built around SD, or Standard Definition, 720x480 video.

Later relatives of DV, such as HDV, utilized a form of MPEG-2 compression without the inter-field/inter-frame temporal distortion inherent in converting field-based video to frame-based video, and without the unavoidable artifacts that result from that step. HDV, however, was never popular and was very short-lived.

The arrival and popularity of TiVo and DVD recorders, and the arrival of EyeTV and Windows Media Center, eventually led to the broad assumption that MPEG-2 and DVD formats were superior to DV or HDV for all purposes. After 2009 and the conversion from NTSC to ATSC, most TV transmissions became progressive with ever-enlarging frame sizes much higher than 720p, leading to further interest in high-compression-ratio codecs like h.264 and, presently, h.265.

The conversion step from DV to MPEG-2 was not without forethought: color samples are 'co-sited' within a field and frame so that minimal color aberrations are possible. MPEG-2 also allows for profiles that are 'interlaced' as well as 'progressive', and later h.264 formats also allow for field-based video storage, though these are rarely used or supported in normal hardware playback equipment. That leads back to the observation that only a sophisticated and complete software playback system can hope to cope with archaic, full-resolution, temporally undistorted, artifact-free playback of old content once it has been compressed, especially if it has been converted to frame-based 'progressive' formats with no way to undo the irrevocable choice made in the era in which it was 'mashed together'.

This all leads me to personally conclude that for VHS, or possibly Betamax, video capture to an archival format, DV is probably the best format. MPEG-2 and h.264/h.265 formats are smaller and are still serviceable delivery formats, and because of their size and easy portability they may make the survival of 'a copy', even if not the best quality available, more likely.

This has to be taken as a recommendation for SD content only, since DV did not support larger frame sizes, and all content after the NTSC-to-ATSC transition would already be in progressive format with a larger color sample budget.

Dealing with DV bridges inevitably leads to questions of timebase correction, frame synchronization, and how to use a FireWire device in Windows 7 or later operating systems.

Timebase correction and frame synchronization are prerequisites for feeding a hardware codec device, even a DVD recorder; so effectively a DVD recorder with pass-through processing is a combined TBC and frame synchronizer. Timebase correction is mostly about detecting a VHS versus broadcast signal and compensating for the noisy head switching at the beginning and end of each field of video, to prevent 'flag waving' or tearing at the top or bottom of the screen. These artifacts are often hidden by the bezels of CRT monitors, as they fall in the overscan region of the picture, but they become visible when watching the video in a preview window or on a digital monitor.

Swimming, or variable-length drift between horizontal sweep lines, must also be corrected by devices capable of detecting it and re-timing the lines relative to each other, through a process known as "velocity compensation", to square up and make uniform the stack of horizontal lines from top to bottom in a field and frame.

Frame synchronization is more about coping with a damaged or incomplete field or frame of video and recovering automatically so that the device always delivers a 'standard' frame rate. Whether it does that by repeating a frame to make up for the unusable remnants or by briefly inserting a black, blue or blank frame into the stream, and whether it simultaneously throws out audio samples to keep audio and video in lockstep, are all design decisions or settings unique to the specific frame synchronizer being used.

DV bridges could react to a loss of signal by aborting or ending a capture, by ending it and starting a new one as soon as a signal became available again, or by pausing the capture and requiring manual intervention by a human being. Some would even make a timecode note so these points could be revisited and addressed later.

FireWire DVRs were not common in the United States; LG and Samsung provided a couple of examples, but by then TiVo, EyeTV and Windows Media Center had claimed the larger share of the public's business. Formac (UK), I-O Data (Japan), and EyeTV (mostly in EU countries) did dabble in FireWire tuners, relegating the DVR piece to software on a connected PC, but for the most part people were largely unaware of their existence.


7/21/2023

Playing audio from Adobe Audition 4.0 on Win7 - a "better way"

Found this "related" workaround for later versions, and it makes a little more sense to me now.

Normally I disable the built-in Realtek speaker and microphone device in Playback and Record sound devices, and leave only the Bluetooth headphones and Bluetooth headset [enabled].

Bluetooth headphones usually do not have an audio input (microphone), but Adobe Audition tries to set them as the input; the Audio Hardware dropdown doesn't list any other possible input source because I have the built-in Realtek audio source disabled.

Bluetooth headsets are usually for gaming and have a mono or stereo mic input, and that's probably why the workaround works: Audition is able to sort out or detect both streams and things just work. I suppose something goes wrong when it tries to open the Bluetooth headphones and finds only one stream; the input is silent, and it fails silently to figure out how to deal with them as an output.

Simply put: with the Realtek built-in audio source set as the input, and the Bluetooth headphones set as the output under Audio Hardware, everything starts working properly, and the Bluetooth headphones play back in high quality with few to no dropouts.

Headset mode has limited Bluetooth bandwidth, and part of that is reserved for the input channel, so dropouts occur and quality suffers quite a bit, with "robot" voice artifacts frequently occurring. With the full Bluetooth bandwidth available in headphones mode, there are no dropouts and no robot voice.

Smooth as silk.