4/01/2021

What if we're Pinball Machines, not Memory Machines?

 Thinking about thought.

We tend to categorize by "like is like".. by association.

So we think of our brain and our thoughts like a computer program running on a central processor.

It's something we sort of intuitively understand, and it's sort of like the machines we hold in our hands. Turn the crank, and procedural kinetic motion produces output.

Memory is a little more like referring to pen and paper.. to a book of past mechanical motions.. the results become memory.

And we think our memory is like a book, the end result of our past mechanical actions.

But what if it's more like a result machine? Not something that methodically copies and encodes inputs like writing on a strip of paper.. but more like a spiky soccer ball with lots and lots of spikes.. one the environment twists and turns and tumbles and leaves lying in a pseudorandom state.

Not in a methodical, easy-to-copy-and-recompose manner.. but as the end result of a wad of paper crumpled and torn, shaped by endless unpredictable blows and interactions with its environment.

More like a 'pinball' in a pinball machine.. it's not so much a copier of inputs from the environment as it is a reactor machine that seeks merely to survive to experience another day.. an endless purgatory machine, endlessly suffering.. where the best outcome is simply not to cease functioning.

To extract memories, then.. or anything portable enough to easily download and upload..

Might be much harder.

We might instead need to subject this tumbling soccer ball to a virtual environment, to observe how its current state 'reacts' to all of its past experiences.. how it currently interprets that which we erroneously perceive as the 'past'.

In such a scenario, the 'past' or 'memory' becomes an endless series of partially true, but also partially self-described, recollections of what the machine has been through. It's not so much what it remembers as it is how it currently narrates and understands its view of the past.

So in order to 'recall' the past.. it must form a virtual environment.. a virtual machine inside its head.. and point its imaginary narrative pointer at that virtual machine to play out scenarios. Something like: 'imagine it's 10 years ago and I'm walking through my house.. what do I see?' The current brain or mind 'state' then begins reacting to the virtual environment, feeding senses of sight, hearing and smell into its current inputs via virtual, imaginary sensors.. playing out the theater as if it were a play.. and holding those experiences in something akin to a temporary memory buffer, much like a linear computer memory buffer.

Those held experiences become a near-term second set of senses.. a mini-brain in a virtual environment that can model a scenario we are trying to solve for.. to predict a near-term desirable outcome. Sort of a Monte Carlo situation, where we try, try again in our tiny version of Minecraft to come up with a solution we think will be worth betting on in the real world with our bodies.
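
Something like this.. a loose sketch of that Monte Carlo idea in C++, where every name and number is invented purely for illustration:

    #include <cstdio>
    #include <cstdlib>
    #include <ctime>

    // One imagined rollout in the head's "virtual machine": score how well
    // a candidate action plays out, with some noise thrown in. (The "good"
    // action, 7, is arbitrary.)
    double imagine_outcome(int action) {
        int miss = action > 7 ? action - 7 : 7 - action;
        double noise = (double)std::rand() / RAND_MAX;
        return -miss + noise;
    }

    int main() {
        std::srand((unsigned)std::time(nullptr));
        int best_action = 0;
        double best_score = -1e9;
        // Try, try again in the tiny internal Minecraft..
        for (int action = 0; action < 20; ++action) {
            double score = imagine_outcome(action);
            if (score > best_score) { best_score = score; best_action = action; }
        }
        // ..then bet on the winner with the body, in the real world.
        std::printf("betting on action %d\n", best_action);
        return 0;
    }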

If true.. and you want to tap into that..

It might be that to extract or copy a memory.. or make new ones by uploading.. you need to tap into the virtual machine used for near-term temporary memory, and the senses it uses for input and output, be they real or imaginary.

The real senses.. driven by headgear and tactile feedback devices like mice or keyboards.. we kind of have a grasp upon.

The temporary virtual-machine senses of near-term memory, we kind of don't have a good grasp of.

Magnetic induction and Neuralink are pokes in that direction.. but until we have a more solid connection to the temporary-memory virtual machine, and can spawn a set of Neuralink virtual senses the brain will accept as 'like' those it creates on the fly for visualizing problem-solving scenarios.. it could be a tough road.

Video games and game console controllers are close: they can immerse a person in a world without physical contact.. even trigger an imaginary experience where we use our perceived memory of how to react, based on our past experiences. In such a situation the first-person shooter player is literally building their own bridge to the outside world and reacting based on their memories. Downloading, in that scenario, is akin to interrogating a player about what they recall.. in real time.. since their arms and legs can only move in real time.

But we know we can think much faster than physical action.. the effective bandwidth can be increased in a dream state, abbreviating or hopscotching across a dreamscape to the lesser or more important elements of the story.

And in fact we know we can upload memories very easily.. by watching a movie. Again the brain finds a way to bridge the gap.

Given the way the brain uses glucose and oxygen.. it may be that a person can only upload and download at faster than real time in a subdued, physically inactive dream state.. in which they are paralyzed to optimize nutrients and oxygen and blood supply for the brain.



3/02/2021

Startech USB2TVTuner is based on eMPIA audio and video chips, not the WIS GO7007

 

So it turned out the Startech USB2TVTuner is not a hardware-compression based device. Hailing from the year 2002 and sold up through 2007, it was basically a simple YUV digitizer with a USB bridge to get the raw 4:2:2 from the capture chips to the software on a PC.
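
Back-of-the-envelope (my numbers, not from any spec sheet): raw 4:2:2 NTSC at 720 x 480 pixels x 2 bytes per pixel x ~30 frames per second is roughly 20.7 MB/s.. which squeaks under USB 2.0's practical throughput of maybe 30-40 MB/s. So a raw digitizer over USB 2.0 was feasible.. it's just that hardware MPEG compression, as on the WIS parts, would have cut that stream down to a few percent of the size.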

I found this out after examining a unit that no longer worked.. the hardware was busted in some manner. 

And I confirmed it by looking into a device driver .INF file, which only referenced eMPIA capture chip technology.

This leaves the ADS DX2 Express and the Pinnacle Dazzle AVC-130 and AVC-170 as the most common WIS GO7007 hardware capture devices. But only the ADS DX2 had a not-ready-for-prime-time 64-bit device driver.. never signed.. that roughly works under Windows 7 x64, with difficulty.

It appears Micronas and then TDK acquired the WIS chip rights, and the line might have been taken off the market or folded into another portfolio.. but WIS-based hardware compression capture chips disappeared around the time XP support basically ended for things like the Plextor M402 and related series.

ATI had a go at continuing MPEG-2 hardware compression until the end of that company, when it was acquired by AMD. ATI had the Theater 550, 650 and later 750 chips, with mostly Windows Media Center support under XP and then Vista.

Lumanate would produce the excellent Dell Angel MPEG USB series, which targeted Windows Media Center and worked with Monsoon SnappySoft capture software.. and AMCap (after a fashion).

AVerMedia of Taiwan and Hauppauge of New York (manufacturing in China?) are still offering products, and both have had long lines of offerings, working with both widely recognized capture software and their own individually supported proprietary versions.



1/03/2021

Categories for Render and Capture, and Node Types for Microphones, Line Connectors and Speakers

In addition to the KSCATEGORY_AUDIO category, the INF file can also "decorate" the device driver as having other interfaces that the endpoint builder might want to probe.

These can help in directing the builder to probe and fill out a description for the device when making its Audio Endpoint.

For some of these types the builder attempts to discover capabilities and interconnected topologies, so that the Endpoint makes better sense to the end user when complete.
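
As a rough sketch of what that "decoration" looks like.. modeled on Microsoft's sample audio driver INFs, with the section and reference-string names (Device.NT.Interfaces, KSNAME_Wave, and so on) as placeholders for whatever a real driver package uses:

    [Device.NT.Interfaces]
    ; Register the kernel-streaming interfaces the endpoint builder probes
    AddInterface=%KSCATEGORY_AUDIO%,%KSNAME_Wave%,Device.Interface.Wave
    AddInterface=%KSCATEGORY_RENDER%,%KSNAME_Wave%,Device.Interface.Wave
    AddInterface=%KSCATEGORY_CAPTURE%,%KSNAME_Wave%,Device.Interface.Wave
    AddInterface=%KSCATEGORY_TOPOLOGY%,%KSNAME_Topology%,Device.Interface.Topology

    [Strings]
    ; Well-known kernel-streaming category GUIDs from ks.h
    KSCATEGORY_AUDIO    = "{6994AD04-93EF-11D0-A3CC-00A0C9223196}"
    KSCATEGORY_RENDER   = "{65E8773E-8F56-11D0-A3B9-00A0C9223196}"
    KSCATEGORY_CAPTURE  = "{65E8773D-8F56-11D0-A3B9-00A0C9223196}"
    KSCATEGORY_TOPOLOGY = "{DDA54A40-1E4C-11D1-A050-405705C10000}"

The node types from the heading (microphones, line connectors, speakers) don't appear to live in the INF at all, as far as I can tell.. they come from the pin and node descriptors the driver itself reports.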

Video capture and audio capture applications typically try to "find" or make sense of the resources of a video capture card by strumming these "inventories" of endpoints and interfaces when building up a list of choosable sources and sinks of video and audio data.
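
A minimal sketch of that "strumming" using the DirectShow system device enumerator.. error handling trimmed, and only the audio-input category walked:

    // Enumerate DirectShow audio-input devices and print their friendly names.
    #include <dshow.h>
    #include <cstdio>
    #pragma comment(lib, "strmiids.lib")
    #pragma comment(lib, "ole32.lib")
    #pragma comment(lib, "oleaut32.lib")
    #pragma comment(lib, "uuid.lib")

    int main() {
        CoInitialize(NULL);
        ICreateDevEnum *devEnum = NULL;
        CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                         IID_ICreateDevEnum, (void**)&devEnum);

        // Swap in CLSID_VideoInputDeviceCategory to walk video sources instead.
        IEnumMoniker *monikers = NULL;
        if (devEnum->CreateClassEnumerator(CLSID_AudioInputDeviceCategory,
                                           &monikers, 0) == S_OK) {
            IMoniker *moniker = NULL;
            while (monikers->Next(1, &moniker, NULL) == S_OK) {
                IPropertyBag *bag = NULL;
                if (SUCCEEDED(moniker->BindToStorage(NULL, NULL,
                                  IID_IPropertyBag, (void**)&bag))) {
                    VARIANT name; VariantInit(&name);
                    if (SUCCEEDED(bag->Read(L"FriendlyName", &name, NULL)))
                        wprintf(L"%s\n", name.bstrVal);
                    VariantClear(&name);
                    bag->Release();
                }
                moniker->Release();
            }
            monikers->Release();
        }
        devEnum->Release();
        CoUninitialize();
        return 0;
    }

This is roughly what AMCap-era programs do at startup to build their device menus.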

Video capture applications aren't very sophisticated about looking for sources beyond the conventions of the year in which they were written.. so a re-think or re-conceptualization of how usable resources are presented can lock them out of running properly on newer, re-thought operating systems.

"Some" device drivers hook themselves into the default audio or "desktop" when loaded.. some do not.. those that do are easier to support with legacy capture software because they merely have to capture from the default channels of the system.

UAC and UVC.. the USB Audio and Video Classes.. seem to be the latest "re-think" in the way video and audio information is presented as a resource to operating systems and programs. And they generally run across USB serial buses.

Since analog video capture is mostly long gone.. supporting moves between XP and Vista, 7, 8 and 8.1 is unlikely to be something that can be done for much longer. Even the device driver signing methods and the keepers of the hardware and kernel keys are consolidating at Microsoft.. locking more and more of the developer community out.. which could lead to an upheaval in computer programming going forward.. an abandonment of Microsoft and Apple products for a more flexible and open system. Unlikely to be Linux, since that too seems to have started blocking flexible development models. A stagnation of original thinking seems to be beginning a new dark age.

Ironically the XP operating system, without hardware device driver signing.. is the easiest to develop for.. and ReactOS may eventually prove to be the favored operating system of the future. It's hard to imagine Microsoft will mentor and shoulder the burden of solving problems for indiscernible and not immediately profitable motives long term.

Why Video Capture boards don't display Audio Endpoints in Vista and 7

I'm real fuzzy on this at the moment.

But it appears that when going from XP to Vista, they re-thought how DirectSound and DirectShow audio devices were presented to programmers.

They invented something called the "Audio Endpoint Builder" service, which monitored the PnP insertion of device drivers, registering various characteristics about loaded device drivers in the registry or class tree.

When one added itself to the "KSCATEGORY_AUDIO" class category, it went to work "probing" the "device" attached to the system by inspecting its pins and their default parameters.. automatically constructing an "Audio Endpoint".

These virtual "Audio Endpoints" were conceptually thought to be "better" than using the DirectShow API, which was based on filters for accessing and manipulating the hardware on a device directly.

Instead, users and programmers were able to "think about" the Audio Endpoint as a fully fleshed-out object, controllable with more generic commands, used to set up an "Endpoint Session".. a session assumed "Blocked" until it was "un-Blocked" and began streaming data, either in or out of its device.

Once these Endpoints were constructed, they populated the Sound Control panels, where they could be designated as the default Playback or default Recording Endpoint.
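
A minimal sketch of that session model against the default Recording Endpoint, using the Vista-era WASAPI interfaces (error handling omitted, shared mode assumed):

    // Open an "Endpoint Session" on the default capture endpoint and run it.
    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <audioclient.h>
    #pragma comment(lib, "ole32.lib")

    int main() {
        CoInitialize(NULL);

        // Find the default Recording endpoint the Sound control panel designated.
        IMMDeviceEnumerator *devices = NULL;
        CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                         __uuidof(IMMDeviceEnumerator), (void**)&devices);
        IMMDevice *endpoint = NULL;
        devices->GetDefaultAudioEndpoint(eCapture, eConsole, &endpoint);

        // Set up the session; it sits "Blocked" until Start() is called.
        IAudioClient *client = NULL;
        endpoint->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL,
                           (void**)&client);
        WAVEFORMATEX *format = NULL;
        client->GetMixFormat(&format);
        client->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                           10 * 1000 * 1000,  // 1 second buffer, in 100 ns units
                           0, format, NULL);

        client->Start();   // "un-Blocked": data begins streaming In
        Sleep(1000);       // ..a real program would pull capture buffers here
        client->Stop();

        CoTaskMemFree(format);
        client->Release(); endpoint->Release(); devices->Release();
        CoUninitialize();
        return 0;
    }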

The reason video capture cards do not appear as Audio Endpoints is apparently because their device driver INF files do not "decorate" themselves with the "KSCATEGORY_AUDIO" class type.. so the "Audio Endpoint Builder" service ignores them and does not build up the Endpoints the Sound Control panels use to populate the Playback and Recording endpoint choices.

It seems the "pins" for in or out should also be decorated with descriptions of whether they are WaveOut or WaveIn or some other type.. but mostly the rest of the DirectShow description should map to parts of the constructed Endpoints, since the Vista-revised "re-conceptualization" allowed for supporting legacy communication protocols and interfaces.

So.. it's possible these devices will just work once their device drivers are revised to carry the badge of "KSCATEGORY_AUDIO", along the lines of the INF sketch above.

One potential problem, however, is "signed" device driver packages.. since they are supposed to come with a .CAT file and certificate set attesting that the device driver package has not been tampered with, and editing the INF would break that attestation.

It might be possible, after the device driver is installed.. to manually patch the registry to "add" the device driver/device to "KSCATEGORY_AUDIO" by adding its GUID.. if that's how it works.. but the same might also be needed for its pins.
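
For a read-only poke at what's registered today.. KSCATEGORY_AUDIO has a well-known GUID, and its registered device interfaces can be listed under the device-interface class tree (whether hand-adding entries there actually works is exactly the open question):

    rem List device interfaces registered under KSCATEGORY_AUDIO
    reg query "HKLM\SYSTEM\CurrentControlSet\Control\DeviceClasses\{6994AD04-93EF-11D0-A3CC-00A0C9223196}"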

Win7 x86 seemed the easier place to avoid device driver signing problems for a test.. and self-signed drivers were at one time possible with Win7 x64, via Test Mode.
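
The Test Mode route, as I recall it (elevated prompt, reboot afterwards.. the .pfx and .cat names here are placeholders):

    rem Allow test-signed kernel drivers (adds a "Test Mode" desktop watermark)
    bcdedit /set testsigning on

    rem Re-sign the driver package catalog with a self-made certificate
    signtool sign /f MyTestCert.pfx /t http://timestamp.digicert.com mydriver.cat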