4/23/2014

HP SIM, handling SNMP traps across NAT


HP SIM is short for "Hewlett Packard - Systems Insight Manager". Essentially it's a free, "Java based" network manager bundled with a SQL Express or PostgreSQL database, which installs on a Windows or Linux operating system.

You can think of it as a Web portal with a database backend that discovers and then collects computer and network device information using a variety of protocols. It then provides a central place to disseminate reports, run tools, schedule tasks, and send out notifications regarding "collections" of networked computers and devices that it "manages".



In an old world view, it's somewhat like an old fashioned "SNMP Network Manager" with WBEM, SSH, WSMAN, Ping and various custom protocols added on.


When you log in you're presented with a traditional three-pane layout with a familiar Windows application menu bar. The left navigation pane is divided into two parts: an upper summary "Dashboard" and a lower "Collections" tree. The right center pane is the main workspace for working on a "Collection".


When you first log in the center pane is focused on a special "Home" page, which has a couple of optional parts for "Finish the Installation" and "Did you know?" images. These can be turned off by going to [Menu]>[Options]>[Home page settings...] and unchecking [x] Show "Do this now.." and [x] Show "Did you know.." -- or you can choose to always open the Home page on a collection.


Aside from installing SIM, the next thing you do is "discover" computers or network devices to "manage" -- you do this by using the "Discovery task".

But before you can use the "Discovery task" you have to prime it by configuring a target, and then "pulling the trigger" so to speak to launch it. It then proceeds to interrogate the "Target" and tries to profile it with various protocol tests, in such a way that it can classify and add the "Target" to one of its "Collections".

You configure the "Discovery task" by going to the [Menu]>[Options]>[Discovery] page and clicking on [Edit].



That loads an additional form into the frame at the bottom, where you can input more information to configure the Discovery task.



There are a lot of optional bits you can configure, but the important ones are an [ IP address ] and a [ Credential ]. Clicking on [ Credential ] lets you enter information to authenticate a query from SIM to the target for a particular protocol.

The default "Credential" page hides a lot of the details --  I usually click on the "hard to see"  [  Show advanced protocol credentials  ] and the tabs then usually make sense.


As a matter of history -- SNMP was one of the first network management protocols and uses a "Community" string for a password. Originally meant for LAN management "only", it flings the password across the network in plain text. SNMPv1 was the original version, followed by SNMPv2, which attempted to secure the password better and provide access restrictions with different "views" on the end point's Management Information Base (MIB, or database schema). SNMPv2c was a watered down version which relaxed some requirements and became more widely adopted. SNMPv3 may be the last version; it brought the most change but isn't well supported by many computers and network devices.
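
To make the difference concrete, here's what querying the same box looks like at each version -- a minimal sketch assuming net-snmp's command line tools, a hypothetical host at 192.168.2.34, and made-up credentials:

# snmpwalk -v1 -c public 192.168.2.34 system
# snmpwalk -v2c -c public 192.168.2.34 system
# snmpwalk -v3 -u simuser -l authPriv -a SHA -A 'authpassphrase' -x AES -X 'privpassphrase' 192.168.2.34 system

The v1 and v2c queries send the "public" community across the wire in the clear; only the v3 query authenticates a real user and encrypts the payload.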

WBEM was an effort to bundle the same things under new "management" (sic). It brought a new Common Information Model (CIM) database schema per computer/device, and a WSMAN effort to handle accessing data and performing tasks on a computer or network device.

All of these protocols potentially have different requirements for authentication or "passwords", and that's what the credentials tabs are all about: providing those credentials, so that when the Discovery task tries to use them against a target, it can access the target.

Saving the changes concludes configuring the Discovery task, and it can then be "scheduled" -- which I almost never do -- or "Run".

The view changes to that of "Task Results" and updates as the Discovery task progresses, informing the user of the success or failure of playing "20 questions" with the computer/device in an attempt to identify and classify the end point, so that it can put it into a "Collection".







4/20/2014

WBEM, what's it to you?

Web-Based Enterprise Management ( WBEM )

It's all about a more secure way of "remotely" pulling the strings and acquiring data from your servers and network devices.

I just got through mentioning [ sfcb ], which is an IBM open source project to produce a "small footprint CIM broker", or in other words a "lightweight CIM repository" for small devices without the massive memory and disk resources of a full fledged server.

It's also the testing ground for a few other initiatives like [ CMPI ], the "Common Manageability Programming Interface". [ CMPI ] is basically a simpler, kinder introduction to writing "providers" (otherwise called "agents" in SNMP land), which go fetch and feed information into the CIM "repository" or "database".. lots of terminology, "lingo and jargon" here.

So CMPI is, in other words, scaffolding or a framework for building "providers" -- a jump start of code to get programmers past the mundane boilerplate required to produce an "official" CIM "provider", so they can focus on just their small aspect of a provider for their device or service.

Before SBLIM sfcb there was tog-pegasus (OpenPegasus), a full fledged operating system WBEM/CIM system, and it still exists. But it ran into some issues getting new "providers" written.. these days it's benefiting a lot by adopting the providers written to CMPI from the sfcb project.

Also, sfcb is written in plain old "C" code, whereas tog-pegasus is primarily written in modern "C++". Not to be left out, there are also "Java" CIM projects and Python frameworks for writing "providers".

WBEM is technically a protocol, which handles getting the request from a remote management application or consumer to the provider, via the CIMOM. It works over http or https ports, so it's generally firewall "proof", meaning it can slide in under the scrutiny of many compliance decisions.
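
By convention, CIM-XML listens on TCP 5988 for http and 5989 for https (the IANA registered wbem-http and wbem-https ports). A quick reachability check against a hypothetical host, just to prove the https listener answers, might look like:

# curl -k -v https://192.168.2.34:5989/ -o /dev/null
# openssl s_client -connect 192.168.2.34:5989 </dev/null | head

The first confirms something is answering on the port (any HTTP response will do, even an error); the second shows the server certificate being offered.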

HTTPS allows the use of SSL certificates for both authentication and ongoing encryption of the communications after authentication.. or authentication can be a separate affair using any of the methods common to the protocol or devices.

From this late date in history (April 2014), the adoption of existing SNMP agents, and the OID frameworks they use to map data to a MIB structure, into the CIM architecture is mostly complete. So if you have existing SNMP agents, they can be mapped into CIM references, which can store and forward or directly access the agents in code or via local network queries.

Why would you want to use WBEM? or CIM or WSMAN?

Mostly because SNMP, while generally poorly understood and minimally deployed, is inherently insecure, and is usually only enabled read-only because of the risk of compromise. The gap in common understanding is that community strings are essentially unencrypted passwords flung across the Ethernet or Internet in plain text, easily sniffed and compromised. Versions 2 and 3, while sometimes deployed, are often difficult to comprehend and set up properly, and the option to downgrade to 2c or 1 is too appealing. For these reasons and others, transitioning to at least an SSL "tube" insulates traffic from prying eyes.. and authentication is made potentially more secure, and no more difficult than setting up peer-to-peer certificate based authentication... and optionally username/password based.. marginally better than telnet over an open party-line.
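
If you want to see just how exposed a v1/v2c community string is, sniff your own query (a sketch assuming tcpdump, an eth0 interface, and the standard 161/udp SNMP port):

# tcpdump -i eth0 -A -s0 udp port 161

Run an snmpwalk from another window and the community string scrolls by as readable ASCII in the packet dump.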

NOTE: "minimally deployed" is a relative term based on your perspective. Over a "private" or LAN based network, it can be relatively "widely" deployed as the defacto standard for "private" networks. But the days of "private networks" even being "trusted" let alone "actually secure" are long gone.. and the generic term "network" is more often a WAN.. in which plain text community string "passwords" are easily sniffed and are often left to the default value "public" -- in which case if they are used.. are only trusted with read-only information to smaller and smaller "views" restricting their usefulness.

"Synthetic" VLANs, MPLS, Lambda networks have brought back some feelings of the private LAN security domain, but tend to be "leaky" in that as they mature, security reviews and control often are the last thing thought about when they are extended or merged.. leading to their "leakiness".. hence "Security by Decree" hasn't materialized as a workable concept in most cases. Earnest "waving of hands" and "wishful thinking".. or strategically planning a career move are somewhat more reliable.


4/18/2014

Sublime sfcb, granting access to WBEM

Web management on Linux for servers and devices is pretty much the same as on Windows and elsewhere: it's based on the WBEM protocol and a CIMOM server. It's kind of like SNMP, and it maps inquiries over to those service providers too.

There are several CIMOM (Common Information Model "object manager") daemons on Linux to choose from. The IBM-initiated SBLIM (pronounced "sublime") open source project's sfcb (small footprint CIM broker) is one.

Installing it with yum is simple:

# yum install --nogpgcheck sblim-sfcb cim-schema sblim-cmpi-*

But once it's installed you need to grant users permission to access the service.


This is done by adding them to an [ sfcb ] group, which isn't created by default.

# groupadd sfcb


Then create a user for the purpose

# useradd cimuser
# passwd cimuser

Then add them to the group (-a appends, preserving any existing supplementary groups)

# usermod -a -G sfcb cimuser

Start the service

# service sblim-sfcb start
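
On a RHEL-style system you'll probably also want it to start on boot, and to confirm it's listening on the standard wbem-https port (assuming chkconfig and netstat are present):

# chkconfig sblim-sfcb on
# netstat -tlnp | grep 5989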

Test the service

#  wbemcli ecn https://cimuser:******@192.168.2.34:5989/root/cimv2 -noverify

192.168.2.34:5989/root/cimv2:Linux_OSProcess
192.168.2.34:5989/root/cimv2:CIM_Processor
192.168.2.34:5989/root/cimv2:CIM_System
192.168.2.34:5989/root/cimv2:CIM_UnixProcess
192.168.2.34:5989/root/cimv2:CIM_PhysicalPackage
192.168.2.34:5989/root/cimv2:Linux_ComputerSystem
192.168.2.34:5989/root/cimv2:CIM_PhysicalElement
192.168.2.34:5989/root/cimv2:Linux_OperatingSystem
192.168.2.34:5989/root/cimv2:SFCB_ServiceAffectsElement
192.168.2.34:5989/root/cimv2:Linux_Processor
192.168.2.34:5989/root/cimv2:CIM_OperatingSystem
192.168.2.34:5989/root/cimv2:CIM_Process
192.168.2.34:5989/root/cimv2:CIM_Card
192.168.2.34:5989/root/cimv2:CIM_ManagedSystemElement
192.168.2.34:5989/root/cimv2:CIM_SystemComponent
192.168.2.34:5989/root/cimv2:CIM_RunningOS
192.168.2.34:5989/root/cimv2:CIM_InstModification
192.168.2.34:5989/root/cimv2:CIM_LogicalDevice
192.168.2.34:5989/root/cimv2:CIM_SystemDevice
192.168.2.34:5989/root/cimv2:CIM_ComputerSystem
192.168.2.34:5989/root/cimv2:Linux_CSProcessor
192.168.2.34:5989/root/cimv2:Linux_BaseBoard
192.168.2.34:5989/root/cimv2:CIM_ProcessIndication
192.168.2.34:5989/root/cimv2:CIM_InstIndication
192.168.2.34:5989/root/cimv2:CIM_SystemPackaging
192.168.2.34:5989/root/cimv2:CIM_EnabledLogicalElement
192.168.2.34:5989/root/cimv2:CIM_ManagedElement
192.168.2.34:5989/root/cimv2:CIM_ServiceAffectsElement
192.168.2.34:5989/root/cimv2:CIM_InstCreation
192.168.2.34:5989/root/cimv2:Linux_RunningOS
192.168.2.34:5989/root/cimv2:CIM_OSProcess
192.168.2.34:5989/root/cimv2:Linux_CSBaseBoard
192.168.2.34:5989/root/cimv2:CIM_StatisticalData
192.168.2.34:5989/root/cimv2:CIM_Indication
192.168.2.34:5989/root/cimv2:CIM_ElementStatisticalData
192.168.2.34:5989/root/cimv2:CIM_Component
192.168.2.34:5989/root/cimv2:CIM_Dependency
192.168.2.34:5989/root/cimv2:Linux_UnixProcess
192.168.2.34:5989/root/cimv2:Linux_OperatingSystemStatisticalData
192.168.2.34:5989/root/cimv2:Linux_OperatingSystemStatistics
192.168.2.34:5989/root/cimv2:CIM_LogicalElement
192.168.2.34:5989/root/cimv2:CIM_ComputerSystemPackage
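
Once the class names enumerate, the next sanity check is pulling actual instances. wbemcli's "ei" (enumerate instances) operation takes the same URL form with a class name appended, e.g.:

# wbemcli ei 'https://cimuser:******@192.168.2.34:5989/root/cimv2:Linux_OperatingSystem' -noverify

which should dump a single instance with properties like the hostname, kernel version and memory figures.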

4/10/2014

XPS 15 9530, installing Windows 7 x64

The XPS (Xtreme Precision Series) is Dell's consumer/gamer counterpart to its Precision series of business/enterprise notebooks.


Normally Dell only has Windows 8 driver support for it.

But I was able to install Windows 7 on it.

The XPS is also a DVD drive-less, Ethernet port-less "airbook" with only one USB 2.0 port.

My choices for installation were limited to Flash drive, or a USB2NIC dongle.


As mentioned before, I went the way of the USB2NIC dongle.

Once PXE booting was working over the dongle, I successfully used SmartDeploy to pre-install a [ platform driver pack ].

These are essentially stripped down collections of driver .inf files and critical driver binaries, all packaged up into one file for easy transport.

SmartDeploy has a website where their customers can download pre-created platform packs [.ppk] to merge with custom images captured from golden virtual machine images.

The PXE boot procedure then runs SmartDeploy's version of WinPE boot media and performs a standard Sysprep installation, including the device drivers.

SmartDeploy is a brilliant piece of software: not as easy as a one dimensional [ Ghost ] imaging type system, but a truly image centric deployment system that [ manages ] drivers as a distinct and separate procedure.

I could have used Server-U or Tftpd32/64 or any other dhcp/tftp based PXE deployment system, but decided to go lowest common denominator for a Windows notebook.. therefore I used WDS.

I did have to tweak the Precision M3800 platform pack slightly, including Intel HD4600 drivers and an NFC driver, but otherwise it was pretty straightforward. I'd offer to share the .ppk, but SmartDeploy currently doesn't accept user submissions, or offer a way to share them in its Community Forum (that I am aware of..)

A couple of interesting bits.

The touchscreen does work in Windows 7, but as soon as you use it you'll probably decide to stop.. it's a novelty at best. And the trackpad is perfectly functional. You can also use a Bluetooth mouse. Or, like me, grab a Logitech Marathon Mouse with an RF USB nub.

In my opinion Windows 8 not only served as a rather abrupt and forced introduction to the touch screen on the desktop/notebook, but also introduced something that "just wasn't useful".

It was years after the "mouse" was introduced before Logitech and other companies came up with a sensitive enough and dynamic use of the mouse that made sense, and only then did it really become popular. I think the mantra "Those who forget the past are doomed to repeat it.." is especially applicable here.

The tablet, in my opinion, is a metaphor for the webpage, and hence its most popular apps run in kiosk mode and reflect a simple "touchy", limited function interface.. in other words they are "gimmicky" and not really useful beyond a certain point.

Somehow thinking standard desktop apps could "adapt" and be as useful with "less" just isn't logical.

It would be like going the opposite direction from the "Ribbon" attempt at organization and instead removing many of the standard [File] [Edit] [View] [Options] [Window] drop-downs, thinking the user would have "fun" discovering touch points on the screen. Very inexperienced.

Put simply, the keyboard was designed for minimal motion of the fingers on each hand, maximizing distinguished icons for textual language. The mouse did the same, minimum motion with distinct gestures for a "motion" language.

Touch goes the opposite direction: maximum movement for little distinguished meaning. It's a waste of time.

Inventing "gestures" is a symbolic attempt at carving out a new portion of your brain, just for holding a new communications talent that would be used for little else. If its for a First person shooter game, and you feel compelled to survive.. it might work.. but forcing a user to learn based on guilt from an impulse purchase is likely to fail.

The other thing is this XPS 15 9530 comes with an extra mSATA disk to partner with the physical rotary hard drive. The idea is to use the Intel Rapid Start technology to boost start up and common application speed by automatically "caching" frequently accessed files to the mSATA disk.

Re-imaging breaks that, since the driver must be installed "a priori" to support the caching function.

So the deployment of the image goes straight to the 500 GB rotary hard drive.

But its not very useful anyway.

With 16 GB of memory, the XPS's RAM is larger than the partition on the mSATA assigned for Rapid Start use.

And most people interested in a caching function report poorly perceived start times with it enabled.

The arrangement and sizes don't quite look thought through. But treating it as a normal notebook with a normal hard drive is reasonable.

The Intel SCSI driver for managing both the rotary hard drive and the mSATA drive is distinct from the SCSI driver that comes bundled with Microsoft. Upgrading it after install is "highly" not recommended unless you have a complete backup.

The reason is that the drive ordering and presentation of available drives to the operating system changes depending on which driver you're using. Microsoft will load an Intel driver bundled with the operating system and present one view. Intel will load its own driver, not bundled with the operating system, and present a different view. [ But ] both drivers originally come from Intel, and have the exact same file names, yet very different behavior.

I believe this is because the Intel driver direct from Intel for Rapid Start "filters" or "hides" the mSATA, which is enumerated as disk 0 if Rapid Start is enabled in the BIOS. It then proceeds to boot an arbitrary portion of the rotary hard disk in concert with the mSATA disk in a pseudo "RAID0" configuration -- not for fault tolerance, but rather for the "opportunity" to pick the higher speed copy of a file "if it is currently in the mSATA cache".

The algorithm for deciding what belongs in the cache, or ages out of it, must be complicated and subject to being wrong. Unless you reboot frequently, the cache could fill with files useless for speeding up booting or for favoring your most frequently accessed apps.. hence an unpredictable preference scenario, and uneven or variable speed behavior throughout the day as you move from application to application or window to window. Expectations are "king"; I would rather have a smooth, even performance experience than a "herky, jerky" one.

So in the end, the promised "speed boost" from the mSATA would be imperceptible to the average user.

Intel's "Rapid" technologies span more than one topic, so I recommend you read up on them.. they're an attempt to move up higher into the operating system and application management stack from the perspective of the hardware, and they try to deliver better performance in speed and battery life. But my opinion is it's all rather "one-sided" and not well thought out.

My Heartbleeds, but not for thee

Big no-op

The CVE, it turns out, is specific to a release of OpenSSL (the 1.0.1 series) which didn't come out until 2012.

We run mostly RHEL5, which "froze" its upstream openssl release [ before ] the vulnerable OpenSSL release in 2012.
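
A quick way to check where you stand: Heartbleed is CVE-2014-0160, introduced in the OpenSSL 1.0.1 series, while RHEL5 ships the older 0.9.8 line.

# rpm -q openssl

On RHEL5 that reports an openssl-0.9.8e build, which predates the bug entirely. On RHEL6, something like [ rpm -q --changelog openssl | grep CVE-2014-0160 ] shows whether the patched build is installed.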

An advantage sometimes in not riding the bleeding edge... wheee

RHEL5 is by no means abandoned; it still gets regular updates and patches from Red Hat to the "frozen" upstream code it's based on. We just missed the party.

So ironically none of our Production services were affected.

Doesn't mean the calls didn't come in though,

[ Is the World Ending? ]

[ Are you gonna make me change my password? ]

Yes the world is ending for some, but not for us, not today.

[ Darn, I was hoping to Party like its 1999.. ]

Sorry..

"Move along, these aren't the Droids your looking for..."


4/09/2014

CAS 3.5.2, Ehcache

More or less a note to myself that Ehcache is working now.

Again, poor documentation -- like where to place the configuration file for the replication, or even where to get one.

After that, how do you tell that it is replicating?

The config file says to create a /cas/ folder to store the on-disk tickets, but that's relative to the tomcat5 directory, and then relative to the CATALINA_TMPDIR set in the /etc/tomcat5/tomcat5.conf file.
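
For anyone retracing this, a sketch of the chase (paths assume the stock RHEL tomcat5 packaging):

# grep CATALINA_TMPDIR /etc/tomcat5/tomcat5.conf

then look in the /cas/ subfolder of whatever directory that reports to find the ticket cache files.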

Finally found it.. and the ticket cache was binary.

And then a dark hint that automatic versus manual discovery of partners (multicast vs unicast) could have performance issues.. and that one of the alternatives like JGroups or Hazelcast(?) might be better. Controversial.

Moving on to CAS-ifying Shib or Shib-ifying CAS to get federation.. mostly just an exercise.. why explore Shib before knowing the technology the federation you're joining actually uses?

I think it boils down to the web SaaS.. does it prefer CAS or prefer Shib? Shib seems less common, but then there are cloud SAML providers to consider. My head is spinning.. would really like to just get back to some Java or C coding for a while.. or a little Python.. the simple stuff.

4/08/2014

CAS 3.5.2, Ehcache and Shib


So the exploration continues

Finally resolved why the deployment.xml file for 3.4.11 doesn't work for 3.5.2, and got it working.

Mostly it's that the class used for LDAP support changed in the authenticator. I didn't really have a lot of good examples, and the docs and tutorials seemed to be dated or tailored towards 3.4 or lower.

I get the sense 4.0 and above are still experimental. But 3.5.2 has quite a bit to offer.

The big gambit is native Java RMI Ehcache replication of tickets in the background, so that a front end load balancer can direct traffic across many servers, or fail over to a redundant CAS server, without worrying that tickets issued by a downed server are unavailable for verification. As soon as they're made, they are replicated to the other servers.
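
For my own notes, the manual-discovery (unicast RMI) flavor of that replication lives in ehcache.xml and looks roughly like this -- a minimal sketch with placeholder host names, ports and cache name, not a drop-in config:

<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=manual,rmiUrls=//cas2.example.edu:40001/ticketsCache"/>

<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="port=40001,socketTimeoutMillis=2000"/>

<cache name="ticketsCache" maxElementsInMemory="10000" eternal="false" timeToLiveSeconds="7200">
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
        properties="replicateAsynchronously=true"/>
</cache>

Each node lists its peers in rmiUrls, so every ticket written locally gets pushed to the others over RMI.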

The next most interesting thing for tomorrow is to look at a cas-shib plugin for supporting Shib federation. We already federate quite well with eduroam.

Also have to make some time to look at ADFS services in Windows Server 2012, not sure how far along they are.. but with .net going open source and Xamarin getting the CLR docs finally.. the area is getting interesting again.