Wednesday, August 11, 2021

A Final Comment on "Zywicki v. Wade"

I was going to leave this as a comment on this thread (https://crookedtimber.org/2021/08/06/zywicki-vs-wade/#comments) over at Crooked Timber, but they went and closed the comments for reasons which aren't clear. In any case, since I've already written it, might as well post it:


Tm @35 / notGoodenough @38 / J-D @41 -

Thank you all for the thorough and thoughtful replies. I recognize that there's lots of case law, custom, etc. in democratic countries which support the concept of vaccine mandates. Tradition is tradition, the law is the law, and I'm not arguing explicitly against either.

Rather, what I wrote was in response to Lee A. Arnold's early comments (@3 and @4) and Tm's subsequent response (@24) which I believe are arguing for a much broader, more general principle (hence my question @11 regarding limits). In response to my question

Should people be obligated to subject themselves to any medical procedure provided the benefit to society is high enough?
Tm says:
This is really not hard to answer: in a liberal society, a requirement is justifiable if it is necessary for the protection of society and does not place an undue burden on the individual.

Here are a couple of points about this exchange which are relevant to your various critiques:

  • I framed the question in terms of "obligation", i.e., is a person morally obliged to take a positive action (such as getting vaccinated) if specific criteria are met. This is not really about whether someone is going to get turned away from the gym; it's about whether people should (in the normative sense) submit to a medical procedure.
  • The "balancing" framing was introduced by Tm. Justification consists of establishing that a requirement 1) is necessary for the protection of society and 2) does not place an undue burden on the individual. I acknowledge that I'm assuming this test includes a balancing element, that there's a requirement of proportionality between the level of protection provided and the burden on the individual. This needn't be the case.

Now, regarding specific comments that aren't addressed by the observations above:

  • I'm still thinking through vaccination as a job requirement in the case where such things aren't clearly spelled out ahead of time. If your employer tells you prior to accepting the job that you'll need to get vaccinated, all good.
  • I support abortion on demand, and think that existing restrictions are bullshit. Tm, it's interesting that you bring up abortion, since I think using it as an example helps illustrate my stance. A person seeking an abortion is making a decision for themselves about how a set of (possibly) subjective/incommensurable criteria affect their personal wellbeing. On the other hand, society/government/public health authorities have absolutely no business making that decision for anyone.
  • "If you insist that the burden of proof for supporting a vaccine mandate requires calculation that the benefits to society outweigh any burdens, then to oppose a vaccine mandate must surely require calculation that the burdens outweigh the benefits to society?" I had to stop and think for a while about this objection, but I'm going to hold my ground because I think that in some ways this is the heart of the issue. I treat it as axiomatic that a preference for bodily autonomy is the (defeasible) default choice for such situations. I'm a little surprised I'm having to raise that here at CT; isn't that basically the crux of the (moral) argument underpinning the right to abortion?
  • "We are not talking about Zywicki's autonomy with respect to vaccination". No, really, we are. I don't know Zywicki, I don't care about Zywicki, but a lot of commenters here seem to be pretty certain that Zywicki's behavior is endangering people. Given the rapidity of the judgement, and the certainty of the verdict, it should be really easy for people to show their work. This was actually part of what prompted me to ask the "limiting principles" question earlier, because it really looks like people are assuming that any incremental protection provided to society is sufficient to require him to get vaccinated.

This paragraph deserves to be quoted in its entirety:

A government should have absolutely no input on the weighing of benefits and burdens in a society? If we were only considering the potential harm to Zywicki, this might be more reasonable – but we are considering the potential harm by Zywicki to others. A government (at least in theory) can have access to a vast body of resources, data, and expertise – including some of the most talented and empirically consistently correct people working in relevant fields (such as epidemiology and medicine). While I am certainly not blind to how frequently public health is overridden by political and economic considerations, however incompetent you believe the government may be in exercising such a function I’d be intrigued at how you came to the conclusion that an individual (with far less access to relevant data and knowledge) is better placed to make such evaluations.
notGoodenough, explain to me how you have not just reiterated the government's position from Buck v. Bell? And now I'm going to rant for a few sentences: For the love of god and all that is holy, how can you possibly hold this position given who just left office? Why the hell should I give any public entity that much deference when fucking Cheeto Mussolini might get reelected in 2024? Don't give more power to your best friend than you're willing to give to your worst enemy.

Maybe I should have said this up front, instead of burying the lede, but you have to hold the line somewhere. Government necessarily needs to do cost/benefit analyses all the time; I'm down with that. But I draw the line at giving society/government/public health authorities the power to compel medical procedures, because that power has historically led to abuses. I was hoping that the wise heads could talk me out of that position in the case of vaccination, but so far I'm not convinced.

Wednesday, August 21, 2019

Distributing Razor Events Using Pub/Sub (3/N)

So yeah, it seems that all ESB systems are complicated to some degree. Having struck out with WSo2 I did a brief survey of other FOSS ESB offerings and decided to take a stab at Apache ServiceMix. So far, so good.

ServiceMix is a bundle of a few different Apache projects:

  • Karaf, a Java runtime that houses the other bits.
  • ActiveMQ, the reliable messaging service.
  • Camel, which provides the routing functionality.
  • CXF, which is used for building web services.

I'm finding it to be significantly easier to deal with than WSo2. That comparison isn't entirely fair to WSo2, since a lot of the subject matter background that I had to slog through for WSo2 ports directly to ServiceMix. Had I tried ServiceMix first I might have voiced the same complaint, but in reverse. That said, ServiceMix still feels like an easier system to work with for a number of reasons:
  • Unified package: The router and broker components are housed together in a single runtime. With WSo2 I had to jump back-and-forth between two different interfaces and two sets of logs.
  • Lightweight/modular: Has a lot of available components, but most of them are (sensibly) disabled by default. This leads to quicker startup and a smaller footprint.
  • Command-line oriented: Provides a text shell that lets you monitor and configure most aspects of the system. This compares favorably with the web interfaces used by WSo2.
  • Robust documentation: There's good documentation available online and there are a number of decent books on the components which make up ServiceMix. Additionally, I get the impression based on Google searches that the user community is larger and more active.

So let's try the same scenario, getting the bare skeleton of a pub-sub system set up. First off, download ServiceMix and start it up:

wget https://www-us.apache.org/dist/servicemix/servicemix-7/7.0.1/apache-servicemix-7.0.1.zip
unzip apache-servicemix-7.0.1.zip
apache-servicemix-7.0.1/bin/servicemix
You should get a startup message and then be dumped into the Karaf console:
karaf@root>

Alright then, let's get started. There should be an ActiveMQ instance lurking about, and https://activemq.apache.org/osgi-integration.html tells me I can see its status via the bstat command:

karaf@root>bstat
BrokerName = amq-broker
TotalEnqueueCount = 1
TotalDequeueCount = 0
TotalMessageCount = 0
TotalConsumerCount = 0
Uptime = 36.659 seconds

Name = KahaDBPersistenceAdapter[xxxxx]

connectorName = openwire
Great, so the broker instance is operational. Unlike WSo2, it's not strictly necessary to pre-configure pub-sub topics, as they'll be auto-created as necessary. IMHO that's bad operational hygiene, as it's likely to lead to an accumulation of misspelled and obsolete topics, so I'm going to pre-define a topic by editing ./etc/activemq.xml as described in https://svn.apache.org/repos/infra/websites/production/activemq/content/configure-startup-destinations.html:
<destinations>
    <topic physicalName="RAZOR" />
</destinations>
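
For orientation, the destinations block goes inside the top-level broker element of ./etc/activemq.xml; a minimal sketch of the surrounding context (based on the ActiveMQ docs, not an excerpt from the shipped file):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="amq-broker">
    ...
    <destinations>
        <topic physicalName="RAZOR" />
    </destinations>
    ...
</broker>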
Restart ServiceMix, and now bstat shows:
karaf@root>bstat
BrokerName = amq-broker
...
Name = RAZOR
destinationName = RAZOR
destinationType = Topic
EnqueueCount = 0
DequeueCount = 0
ConsumerCount = 0
DispatchCount = 0
...
Looking good.

We've got a message broker, we've got a pub-sub topic, now let's set up some stub pub-sub consumers. At this point we're going to stop working with the ActiveMQ portion of the system and start working with the Camel portion. Here's the XML that sets up two consumers which do nothing but log, based on the example from https://servicemix.apache.org/docs/7.x/quickstart/camel.html:

<blueprint
    xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
      http://www.osgi.org/xmlns/blueprint/v1.0.0
      http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
      <route>
        <from uri="activemq:topic:RAZOR"/>
        <log message="RAZOR topic consumer #1"/>
        <to uri="mock:topicsink" />
      </route>
      <route>
        <from uri="activemq:topic:RAZOR"/>
        <log message="RAZOR topic consumer #2"/>
        <to uri="mock:topicsink" />
      </route>
    </camelContext>

</blueprint>
The blueprint and camelContext tags are boilerplate; all the action happens in the route blocks. Here's how those are interpreted:
  • Create a route that consumes messages from the RAZOR topic of the ActiveMQ instance, writes a log message "RAZOR topic consumer #1", and then sends the message to the mock (i.e. fake test fixture) destination topicsink.
  • Rinse and repeat, this time logging the message "RAZOR topic consumer #2".
Create a file with the above content and then copy it into the deploy directory (that step is sketched just after the bstat output below). ServiceMix should notice the new file in a few seconds and then configure the routes. If all has gone well then there should now be two registered consumers for the RAZOR topic:
karaf@root>bstat
...
Name = RAZOR
destinationName = RAZOR
destinationType = Topic
EnqueueCount = 0
DequeueCount = 0
ConsumerCount = 2
DispatchCount = 0
...
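
For reference, the deploy step is just a file copy; a minimal sketch, assuming the blueprint was saved as razor-routes.xml (my name, pick whatever you like) and ServiceMix was unpacked where we left it:

cp razor-routes.xml apache-servicemix-7.0.1/deploy/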

Let's test it out. The ServiceMix console provides the activemq:producer command for sending messages to the ActiveMQ instance:

karaf@root>activemq:producer --destination topic://RAZOR --message 'test' --user smx --password smx --messageCount 1
karaf@root>log:display -n 2
2019-08-20 14:03:03,627 | INFO  | sConsumer[RAZOR] | route3                           | 43 - org.apache.camel.camel-core - 2.16.5 | RAZOR topic consumer #1
2019-08-20 14:03:03,627 | INFO  | sConsumer[RAZOR] | route4                           | 43 - org.apache.camel.camel-core - 2.16.5 | RAZOR topic consumer #2
The messages show up in the log as expected. A couple of comments:
  • The user name and password for connecting to ActiveMQ are defined by activemq.jms.user and activemq.jms.password in ./etc/system.properties.
  • The log:display command is pretty nifty; it lets you view the system log from the shell instead of hunting it down in the file system.
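
One more shell nicety worth mentioning (assuming the bundled Karaf behaves like stock Karaf here): log:tail works like log:display but follows the log continuously until you hit Ctrl-C, which is handy when you're watching routes fire in real time:

karaf@root>log:tail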

So we've verified that the consumers are working. The last thing I want to do in this post is set up a simple REST interface for receiving messages destined for the RAZOR topic. Things get a little confusing here, as ServiceMix has a plethora of options for creating RESTful services:

  • CXF: A full-featured service framework.
  • REST Swagger: Creates RESTful services using Swagger specifications.
  • REST: Allows definition of REST endpoints and provides REST transport for other components.
  • RESTlet: Yet another REST endpoint implementation.
A non-trivial amount of doc-reading led me to conclude that the REST component was best-suited to my needs, since all I really want is for ServiceMix to listen for inbound HTTP POSTs and then pass the payload on to ActiveMQ.

Additional complexity is added by the fact that some of the options above can be provided by one of several software components. For example, according to the docs, REST consumer functionality is provided by any of: camel-coap, camel-netty-http, camel-jetty, camel-restlet, camel-servlet, camel-spark-rest, or camel-undertow. One of these needs to be installed and enabled in the ServiceMix runtime before the REST directive can be used. I chose camel-restlet for no particular reason:

karaf@root>feature:install camel-restlet
karaf@root>feature:list | grep camel-restlet
camel-restlet                           | 2.16.5           | x        | Started     | camel-2.16.5                |

And now, the augmented XML:

<blueprint
    xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="
      http://www.osgi.org/xmlns/blueprint/v1.0.0
      http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd">

    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
      <restConfiguration bindingMode="auto" component="restlet" port="8080" />
      <route>
        <from uri="rest:post:razor" />
        <inOnly uri="activemq:topic:RAZOR" />
      </route>
      <route>
        <from uri="activemq:topic:RAZOR"/>
        <log message="RAZOR topic consumer #1"/>
        <to uri="mock:topicsink" />
      </route>
      <route>
        <from uri="activemq:topic:RAZOR"/>
        <log message="RAZOR topic consumer #2"/>
        <to uri="mock:topicsink" />
      </route>
    </camelContext>

</blueprint>
Here's how this works:
  • The restConfiguration tag indicates which software component (restlet, which we just installed) will be providing REST endpoint services and that the REST service should be provided on port 8080.
  • ServiceMix should listen for POST requests to /razor and route them to the RAZOR ActiveMQ topic. The use of inOnly in that routing rule indicates that no reply should be expected.
Copy the updated XML into the deploy directory. Once ServiceMix has picked up the new configuration there should now be something listening on port 8080:
$ netstat -an | grep -i listen | grep 8080
tcp46      0      0  *.8080                 *.*                    LISTEN

Alright, let's send a message!

$ curl --data 'message' http://localhost:8080/razor
and
karaf@root>log:display -n 2
2019-08-21 14:28:52,720 | INFO  | sConsumer[RAZOR] | route3                           | 43 - org.apache.camel.camel-core - 2.16.5 | RAZOR topic consumer #2
2019-08-21 14:28:52,720 | INFO  | sConsumer[RAZOR] | route2                           | 43 - org.apache.camel.camel-core - 2.16.5 | RAZOR topic consumer #1
Boom!
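
As an additional sanity check (my suggestion; I didn't capture the output), re-running bstat should show the RAZOR topic's counters advancing: one enqueue per POST, and one dispatch per POST per consumer:

karaf@root>bstat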

Having reached the point where we can submit a message via REST we'll call it a day. Observations so far:

  • ServiceMix isn't exactly simple, but it's less complicated than WSo2.
  • I'm particularly enamored of Camel's XML DSL; it seems very well designed for succinctly setting up routing rules. Putting the basic pub-sub skeleton together was a matter of a few route blocks. Compare this with WSo2, which required me to set up 3 different proxy services to accomplish the same effect.
When we pick up we'll do the integration work needed to catch Razor events and send them to ServiceMix.

Distributing Razor Events Using Pub/Sub (2/N)

You know how everyone talks about enterprise buses, but you've never met anyone who actually has one in production? It's because they're hella complicated. I mean, don't get me wrong, the documentation for WSo2 Enterprise Integrator is very good. But even knowing exactly what I want to accomplish it still feels like I've been handed a box of legos without any assembly instructions.

The core of pub-sub messaging in WSo2 is the Message Broker, which provides all the necessary configuration and message-schlepping functionality. However, the Message Broker only speaks JMS, which means that in most cases you need to wrap it in translation layers in order for it to talk to other systems. These translation layers are provided by the Enterprise Service Bus (ESB), which implements the Messaging Gateway pattern using something called a proxy service.

I'll stop here to add that figuring out everything in the previous paragraph took a non-trivial amount of time.

So in this post we're going to do the following:

  • Configure the Message Broker for pub-sub.
  • Configure the ESB to talk to the Message Broker.
  • Set up a couple of mock proxy services to act as message consumers.
  • Set up a proxy service to provide HTTP -> JMS message translation.

Configuring the Message Broker

Previously, we installed all the WSo2 components on our Razor server. Log into this server and fire up the Message Broker:

[root@razor ~]# wso2ei-6.5.0-broker
...
[2019-08-09 10:37:27,334] [EI-Broker]  INFO {org.wso2.carbon.core.internal.StartupFinalizerServiceComponent} -  WSO2 Carbon started in 40 sec
[2019-08-09 10:37:27,591] [EI-Broker]  INFO {org.wso2.carbon.ui.internal.CarbonUIServiceComponent} -  Mgt Console URL  : https://192.168.15.254:9446/carbon/
Note that any log messages generated by the broker will show up in this window. In order to make the admin interface accessible from the host you'll need to set up a port forwarding rule:
VBoxManage natnetwork modify   --netname natnet1 --port-forward-4 'broker:tcp:[127.0.0.1]:9446:[192.168.15.254]:9446'

So, navigate to https://localhost:9446/ and log in using the default credentials (admin/admin), which will take you to the main interface for the broker. Pub-sub systems are built around the concept of topics, collections of publication channels organized hierarchically in some sort of semantically meaningful fashion. Our initial task is to set up a topic hierarchy that makes sense for the Razor use case.

Razor emits various kinds of events, so it makes sense for the topic hierarchy to mirror this scheme:

razor
+ node-registered
+ node-bound-to-policy
+ node-unbound-from-policy
+ node-deleted
+ node-booted
+ node-facts-changed
+ node-install-finished
In each case use the "Add" link under "Topics" to create the appropriate topic; you'll need to swap out '-' for '_' in topic names, since the broker doesn't seem to like '-'. For the sake of experimentation make both subscription and publication available to everyone; in the real world you'd probably want to set up separate roles for the producers and consumers of Razor events.
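
After that substitution, the topics actually created on the broker end up looking like this (same hierarchy as above, just with broker-friendly names):

razor
+ node_registered
+ node_bound_to_policy
+ node_unbound_from_policy
+ node_deleted
+ node_booted
+ node_facts_changed
+ node_install_finished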

Configuring the ESB to talk to the Message Broker

Next up on the agenda is to tell the ESB to use the Message Broker as its source/sink for JMS-based communications. This process is covered in gory detail at https://docs.wso2.com/display/EI650/Configure+with+the+Broker+Profile, and amounts to:

  • Edit /usr/lib64/wso2/wso2ei/6.5.0/conf/axis2/axis2.xml:
    • Search for 'EI Broker Profile', and uncomment the transportReceiver block immediately following.
    • Ensure that the transportSender block with name jms is uncommented.
    This tells the ESB that it's going to be talking to a WSo2 message broker for JMS (as opposed to some other JMS implementation).
  • Edit /usr/lib64/wso2/wso2ei/6.5.0/conf/jndi.properties:
    • Ensure that the TopicConnectionFactory URL, which provides the address, protocol, and credentials for the Message Broker connection, is correct. The default value of amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5675' should work unless you've messed with the broker's credentials or interface bindings.
    • Create a topic entry for the razor topic: topic.razor = razor.
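
Concretely, the relevant lines of jndi.properties should end up looking something like this (the connection factory line is the shipped default; the topic line is our addition):

connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5675'
topic.razor = razor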

Configuring mock message consumers

We don't yet know which systems are going to eventually end up consuming Razor messages, so to get started we're going to set up a couple of fake consumers which log that they've received a message and then drop it on the ground. Start the ESB:

[root@razor ~]# wso2ei-6.5.0-integrator
...
[2019-08-09 11:41:58,272] [EI-Core]  INFO - StartupFinalizerServiceComponent WSO2 Carbon started in 39 sec
[2019-08-09 11:41:58,644] [EI-Core]  INFO - CarbonUIServiceComponent Mgt Console URL  : https://192.168.15.254:9443/carbon/
As with the broker, keep an eye on this window for log messages originating from the ESB. Add an appropriate port forwarding rule:
VBoxManage natnetwork modify   --netname natnet1 --port-forward-4 'esb:tcp:[127.0.0.1]:9443:[192.168.15.254]:9443'
and then navigate to https://localhost:9443/, which will take you to the ESB interface. Credentials are the same as for the broker, admin/admin.

What we're going to do at this point is add a couple of proxy services via Services -> Add -> Proxy Service. We'll do this by creating a "Custom Proxy", switching to "source view", and then pasting in a modified version of the XML available at https://docs.wso2.com/display/EI650/Publish-Subscribe+with+JMS#Publish-SubscribewithJMS-Configuringthesubscribers. Here's what I arrived at after a little bit of tinkering:

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="RazorEventSubscriber1"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="jms">
   <target>
      <inSequence>
         <property name="OUT_ONLY" value="true"/>
         <log level="custom">
            <property name="Subscriber1" value="I am Subscriber1"/>
         </log>
         <drop/>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
   </target>
   <parameter name="transport.jms.DestinationType">topic</parameter>
   <parameter name="transport.jms.Destination">razor</parameter>
   <parameter name="transport.jms.ConnectionFactory">myTopicConnectionFactory</parameter>
   <description/>
</proxy>

So what does this all mean? Here's my stab at how the above is being interpreted, based on https://docs.wso2.com/display/EI650/Using+the+ESB+as+a+JMS+Consumer#UsingtheESBasaJMSConsumer-One-waymessaging:

  • Create a proxy service named RazorEventSubscriber1 which starts automatically and listens on the JMS transport.
    • The specifics of how to connect to JMS are provided by the myTopicConnectionFactory parameter of the JMS transportReceiver block in axis2.xml (which we uncommented in the previous section).
    • Listen on the channel razor, which is a pub-sub topic channel.
  • When processing inbound messages:
    • Do not expect a reply, as denoted by OUT_ONLY being set to true.
    • Log the key value pair Subscriber1/I am Subscriber1 on receipt of a message.
    • Drop the message.
  • Outbound processing (return of a reply to the sender of the inbound message) just sends on the message unmodified. I believe this block should never be triggered because OUT_ONLY is set to true.
You see what I mean about a box of legos. I'd really like documentation that describes the available blocks, their options, and so on, but if such a thing exists I haven't been able to find it.

You should see some activity as soon as you paste the XML above into the source view of ESB. The broker console should have messages indicating that the ESB has created a subscription to the razor topic:

[2019-08-07 14:34:06,457] [EI-Broker]  INFO {org.wso2.andes.kernel.AndesChannel} -  Channel created (ID: 127.0.0.1:63378)
[2019-08-07 14:34:06,596] [EI-Broker]  INFO {org.wso2.andes.kernel.AndesContextInformationManager} -  Queue Created: AMQP_Topic_razor_NODE:localhost/127.0.0.1
[2019-08-07 14:34:06,598] [EI-Broker]  INFO {org.wso2.andes.kernel.AndesContextInformationManager} -  Binding Created: [Binding]E=amq.topic/Q=AMQP_Topic_razor_NODE:localhost/127.0.0.1/RK=razor/D=false/EX=true
[2019-08-07 14:34:06,659] [EI-Broker]  INFO {org.wso2.andes.kernel.subscription.AndesSubscriptionManager} -  Add Local subscription AMQP subscriptionId=d506a4d7-6d1f-4c51-b294-f19004b04ff8,storageQueue=AMQP_Topic_razor_NODE:localhost/127.0.0.1,protocolType=AMQP,isActive=true,connection= [ connectedIP=/127.0.0.1:63378/1,connectedNode=NODE:localhost/127.0.0.1,protocolChannelID=b6cbb77f-2d95-4e7a-bfba-b8bd9ceb21f1 ]
Additionally, if you navigate to the topic subscription list on the broker (https://localhost:9446/carbon/subscriptions/topic_subscriptions_list.jsp?region=region1&item=topic_subscriptions) you should see a non-durable subscription corresponding to the proxy which was just configured.

Provided that all worked as expected you should rinse and repeat the proxy creation process, changing the string Subscriber1 to Subscriber2 in the associated XML to differentiate the two proxy services.
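
In other words, the second proxy's XML is identical to the first except for two lines (everything else carries over verbatim):

name="RazorEventSubscriber2"
...
<property name="Subscriber2" value="I am Subscriber2"/>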

Ok, let's make sure this part of the plumbing is working. Navigate to https://localhost:9446/carbon/topics/topic_manage.jsp and, in the "Publish" section, set the topic to razor and the text message to "foo", then click "Publish". The ESB console should come back with

[2019-08-09 12:52:40,904] [EI-Core]  INFO - LogMediator Subscriber2 = I am Subscriber2
[2019-08-09 12:52:40,904] [EI-Core]  INFO - LogMediator Subscriber1 = I am Subscriber1
We have successfully published a message (using the broker management interface) and verified that it was received by both of the consumers.

Creating an HTTP -> JMS proxy

We've finished the backend (sort of, for the moment), now it's time to work on the frontend. Again, we're going to create a proxy, but this one is going to take HTTP and translate it to JMS. Here's what I ended up with after modifying the inbound proxy from https://docs.wso2.com/display/EI650/Publish-Subscribe+with+JMS#Publish-SubscribewithJMS-Configuringthepublisher:

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="RazorEventBusProxy"
       startOnLoad="true"
       statistics="disable"
       trace="disable"
       transports="http">
   <target>
      <inSequence>
         <property name="OUT_ONLY" value="true"/>
         <property name="FORCE_SC_ACCEPTED" scope="axis2" value="true"/>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
      <endpoint>
         <address uri="jms:/razor?transport.jms.ConnectionFactoryJNDIName=TopicConnectionFactory&amp;java.naming.factory.initial=org.wso2.andes.jndi.PropertiesFileInitialContextFactory&amp;java.naming.provider.url=conf/jndi.properties"/>
      </endpoint>
   </target>
   <description/>
</proxy>
Here's what's going on:
  • Create a proxy service named RazorEventBusProxy that starts automatically and listens on HTTP.
  • When processing an inbound message:
    • Do not expect a reply.
    • Automatically return a 202 to the HTTP client, as denoted by FORCE_SC_ACCEPTED being set to true. This is apparently the appropriate pattern for a fire-and-forget API.
  • The endpoint for these operations is the razor topic of the JMS connection defined by the TopicConnectionFactory entry in the conf/jndi.properties file. Messages should be forwarded to this endpoint with an implicit translation from HTTP to JMS.
I had a little bit of an issue with the format of the URI. https://docs.wso2.com/display/EI650/Using+the+ESB+as+a+JMS+Producer suggests that I should just be able to set it to jms:/razor?transport.jms.ConnectionFactory=TopicConnectionFactory, but that resulted in
[2019-08-09 13:42:04,618] [EI-Core] ERROR - Axis2Sender Unexpected error during sending message out
java.lang.NullPointerException
 at javax.naming.NameImpl.<init>(NameImpl.java:283)
 at javax.naming.CompositeName.<init>(CompositeName.java:231)
so I used the longer, uglier version above.

Ok, let's test out the front end. The address of the service, which can be obtained by navigating to https://localhost:9443/carbon/service-mgt/service_info.jsp?serviceName=RazorEventBusProxy, is http://razor.localdomain:8280/services/RazorEventBusProxy. So, set up an appropriate port forwarding rule:

VBoxManage natnetwork modify   --netname natnet1 --port-forward-4 'services:tcp:[127.0.0.1]:8280:[192.168.15.254]:8280'
And send a message:
curl -H 'Content-Type: application/xml' --data '' http://localhost:8280/services/RazorEventBusProxy
The log messages from the consumers should show up as usual:
[2019-08-09 13:45:14,889] [EI-Core]  INFO - LogMediator Subscriber1 = I am Subscriber1
[2019-08-09 13:45:14,890] [EI-Core]  INFO - LogMediator Subscriber2 = I am Subscriber2
The astute reader will notice that I changed the message payload. Seems like ESB really wants valid XML; setting the body to foo makes ESB sad:
[2019-08-09 13:59:05,759] [EI-Core] ERROR - RelayUtils Error while building Passthrough stream
org.apache.axiom.om.OMException: com.ctc.wstx.exc.WstxUnexpectedCharException: Unexpected character 'f' (code 102) in prolog; expected '<'
 at [row,col {unknown-source}]: [1,1]
There's probably some way to change that behavior, but I've not been able to determine how just yet.
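
In the meantime the workaround is just to send a well-formed XML payload; any trivial wrapper element will do (illustrative, consistent with the empty-body test above):

curl -H 'Content-Type: application/xml' --data '<event>foo</event>' http://localhost:8280/services/RazorEventBusProxy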

That concludes our initial foray into the world of WSo2. It's not for the faint of heart, but it shows promise in being able to solve some interesting problems. When we pick up next time we'll look at getting Razor to talk to the inbound proxy.

...

Or maybe not... when I picked this back up I started getting the following error when sending messages to the proxy service:

[2019-08-15 10:06:49,970] [EI-Broker]  WARN {org.wso2.andes.server.AMQChannel} -  MESSAGE DISCARDED: No routes for message - Message[(HC:346717419 ID:0 Ref:0)]: 0; ref count: 0
There has been some sort of breakdown between the proxy and the message broker; the broker is receiving messages but then is failing to route them to the subscribed listeners. I have not, despite much poking, been able to determine why this behavior is occurring, which puts an end to this exercise.

If I feel so inclined I may try this again with a different ESB system; it'll be interesting to see whether they're all as complicated to configure as WSo2.

Distributing Razor Events Using Pub/Sub (1/N)

One of the avenues left unexplored during our tinkerings with Razor was how the various events it emits might be put to good use. It knows a whole bunch of interesting things about how hardware is configured, which IPs are associated with which hostnames, etc., information which can profitably be used throughout an IT organization. So let's see if we can figure out a good way to capture and distribute this information.

Consider that Razor emits a single message for any particular event, but that message might be of interest to multiple systems. This need for message duplication/broadcast points in the direction of a publish/subscribe system of some kind. Google tells me that there are a couple of ways to go in regards to technology selection:

  • There are some light(er)-weight systems like Kafka, Pulsar, etc. which support the pub/sub paradigm, but seem to require a lot of coding to make different pieces talk to each other.
  • Fairly heavy-weight Enterprise Service Bus (ESB) systems which have kitchen-sink functionality and don't (necessarily) require a lot of code.
The latter seems more appropriate for this situation, since I'm not developing an application from scratch but rather trying to integrate a bunch of third-party systems.

So, which ESB to use then? There are a bunch of them, and they all seem to be fairly complicated, so it's hard to tell at this point which might be the best. I'm inclined to try WSo2, mostly on the basis that they claim to have all functionality available 100% free of charge. Could just be propaganda; we'll see.

So, what I'd like to do for this experiment is as follows:

  1. Set up a pub-sub system.
  2. Hook up Razor as a message source.
  3. Hook up a couple of message consumers, one of which will be some sort of persistent store and one of which will maybe be an inventory or IP management system.

Just to keep things simple I'll be using the same testbed that I used for the original mucking about with Razor. WSo2 will be installed on the Razor server (not recommended for production), which strongly suggests upping the available RAM on that VM from 1G to 4G.
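With VirtualBox that's a one-liner while the VM is powered off (a sketch; I'm assuming the VM is actually named 'razor'):

VBoxManage modifyvm razor --memory 4096

Power up the VM and then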

wget https://product-dist.wso2.com/downloads/enterprise-integrator/6.5.0/downloader/wso2ei-linux-installer-x64-6.5.0.rpm
rpm -Uvh wso2ei-linux-installer-x64-6.5.0.rpm
If all goes well you should see a message on console about the various ways of invoking different WSo2 components.

We'll leave it at that for now. Next time we'll start getting things set up in earnest.

Friday, June 28, 2019

Bare Metal Management With Razor (4/N)

Having digressed into issues of gender and Twitter censorship, let's get back to talking about Razor.

I'm pretty happy with the system as a whole. It's easy to set up and easy to understand, and I appreciate that they've put together a bunch of off-the-shelf components in a way that facilitates extension.

The experimental install that I documented (1, 2, 3) would need significant work to support 24x7 operations. Things that would need to be done:

  • Redundant Razor servers. This is easy enough to accomplish, since the server itself is stateless. Just build a couple and hide them behind a VIP.
  • HA Postgres DB. The current Postgres docs list a number of different solutions which are supported to varying degrees.
  • HA DHCP. A little Googling suggests that Dnsmasq isn't awesome at failover; the recommendation seems to be to use ISC DHCP instead because it has a built-in failover protocol.

There's also the question of how you handle multiple LANs. DHCP requests, by design, are confined to a single broadcast domain. If you want Razor to be able to handle requests for hosts across multiple broadcast domains then you need to overcome this limitation. Typically this is accomplished via DHCP relay, the details of which will vary depending on what DHCP server you're using.
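
For example, with ISC's relay agent the gist is just to name the listening interface and the address of the real DHCP server (a sketch; the interface and address are placeholders for your environment):

dhcrelay -i eth1 192.168.15.254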

Lastly, there's the question of IP management and DNS. If you're imaging systems, giving them names, and assigning them IP addresses semi-permanently, you'd like a system that's aware of the fact and then does the right things: updates DNS records, stops offering assigned IPs via DHCP, and so on. Dnsmasq doesn't do anything in this regard, so in a real world setting you'd want a smarter piece of software handling DNS and DHCP. ISC has a system called Kea that is intended for this use case, though I wasn't aware that it even existed until writing this post.

Anyway, in conclusion: Razor is pretty awesome, supports the bare metal management use case better than other systems I've looked at, and does OS imaging pretty well too. It's not quite as robust out-of-the-box as Foreman or MAAS in terms of things like distributed operation and IP management, but that can be overcome with a little bit of additional work.

Who Could Have Foreseen?

On the face of it, it's silly that Twitter blocked David Neiwert's account. But...

THIS IS EXACTLY WHAT WE SAID WOULD HAPPEN!!!

Please don't misunderstand me, I really like David's writing. But we told all y'all over and over and over again that there was no way for Twitter (or Facebook or YouTube or...) to make nuanced judgments at scale about who/what should be blocked, that calling for certain content to be blocked would inevitably lead to other stuff getting swept up as well, and that it would be better to just forget the whole thing.

So you will find my sympathies somewhat reduced when what has come to pass is exactly what we predicted would come to pass. Innocuous content getting blocked is a known side-effect of content moderation regimes, regimes which I expect that a lot of the people carping about David's treatment actively support.

So maybe now that the "wrong people" are getting caught up in it we can all just step back and reconsider the entire concept, yes?

Tuesday, June 25, 2019

Is Sex Or Gender Assigned At Birth

This started out as a footnote to my previous post, but it turns out that there's enough meat on the bone to merit its own discussion. So, what is assigned at birth, sex or gender?

Consider the following ritual as if you're an anthropologist studying a foreign culture:

  1. A baby is born.
  2. The baby is examined by a designated baby examiner.
  3. The designated baby examiner pronounces that the baby is "X".
The complication that arises in interpreting this ritual is that X can refer to sex or gender class membership, but it's not immediately clear which is intended.

Further investigation reveals several additional facts:

Let's consider the hypothesis that X refers to gender. The implication is that the baby examination procedure is fundamentally mistaken, and that generations of baby examiners around the world have been rendering gender judgements on the basis of irrelevant information. It certainly wouldn't be the first time that a cultural ritual has been found to be totally baseless, so history tells us that we shouldn't eliminate it out of hand.

How about the opposite hypothesis, that X refers to sex? This interpretation dovetails nicely with what we know: Baby examiners are collecting information about sex via a proxy (appearance of external genitalia) which is known to be reliable, and then making declarations on the basis of that information.

Both of the above interpretations are plausible, but which one is more plausible? I will submit that the "sex" hypothesis is consistent with the observed behavior, and thus is more likely to be true than the "gender" hypothesis, which is not consistent with observed behavior.

The obvious follow-ons are then "How does gender happen?" and "Why is gender correlated with sex?". Sex is assigned at birth, and gender is subsequently constructed on the basis of sex. Again, this process nicely explains observed behavior while potential alternatives do not.

Are You Still On Your Default Name?

Overheard in one of our office Slack channels:

trans people don't say "did you assume my gender"
we say "nice gender did your mom pick it"
and
"extremely funny from someone on their default name"
These were mentions, not uses, but given the speaker I believe it's safe to treat these pronouncements as capturing a certain strand of thought. It's not a sentiment that I recall having been exposed to before, and so it's worth writing down.

Regarding the first phrase, apparently it's a meme (so minus points for originality on the part of the speaker?). The initial read on this is that it's a pithy restatement of the idea that gender is assigned at birth. However, the phrase also echoes (deliberately, I assume) "Nice shirt, did your mom pick it?", the implication by analogy being that identifying as the gender you were assigned at birth is not fashionable/stylish.

The "default name" quote doesn't appear to be a meme or anything like that. The implication in the statement is that using the name you were given at birth shows... a lack of reflection, maybe? Or, again, style? In any case, it's grounds for questioning their judgement.

Taken together the quotes above express a certain... aesthetic sensibility, maybe? I find the comment about "default name" to be annoying on the grounds that I don't think names are particularly expressive. There's a big switching cost associated with changing your name and not a whole lot of benefit (outside of certain situations)... but maybe that's the point? Could name changing be a form of costly signaling?

Critiques of (not) choosing your gender, on the other hand, have some facial plausibility, but a lot really rides on the interpretation of "choose". Is "choosing" merely engaging in behaviors not typically associated with your gender, or is "choosing" only choosing if you make some sort of public declaration?

Consider: I've written elsewhere that in some of the places I've lived I've been well outside the mainstream in terms of gender presentation. At the same time, however, I've never identified as gender-nonconformant or trans or anything of that nature and think that it would stretch those terms beyond meaning were I to do so. My point is that I was deviating from expected behavior, so could be said to be "choosing my gender" in that sense. At the same time, however, I never publicly identified as anything other than the gender associated with the sex I was assigned at birth, in which case it can be argued that I wasn't making any sort of choice.

Anyhow, interesting phenomena, worth tucking away for future consideration.

Sunday, June 16, 2019

When Orders Collide

So, what happens when first-order and second-order arguments get entangled? You get a mess, that's what.

Justin Weinberg, in his recent post Trans Women and Philosophy: Learning from Recent Events, says the following in the section "Some final notes":

Please avoid first-order discussion of trans-inclusive and trans-exclusionary arguments or arguments about bathroom or prison policies and the like; I’m not interested in hosting those disputes in the comments on this post.
Which, in his defense, seems like a reasonable rule if you're trying to have a second-order discussion. However! I then recalled the following bit from the original piece by t philosopher:
My gender is not up for debate. I am a woman. Any trans discourse that does not proceed from this initial assumption — that trans people are the gender that they say they are — is oppressive, regressive, and harmful.
This is followed by a "call to action" which states that contrary views should not be published, spoken, or otherwise given a platform.

It seems to me that t philosopher's position effectively couples first-order and second-order concerns, and that this is a major contributing factor to why discussions of some trans-identity-related issues have proven intractable.

Publication and speaking are the tools which philosophers have traditionally used to investigate first-order problems. I publish a paper saying "X is bad", someone else publishes a rebuttal "No, X is good", someone else chimes in with "No, you're both wrong", etc. t philosopher asserts that these traditional tools should be restricted, and justifies that restriction based on a first-order consideration. Which, seemingly inevitably, leads to the situation where one can't discuss which tools are appropriate without bringing up and examining first-order concerns.

I would really like to have seen Justin grapple with this aspect of the discussion more. Specifically, what are the fallbacks if part of the traditional philosophic tool suite has been proscribed? Presumably he wants to see philosophy continue as a going concern, which would seem to necessitate some viable, alternative approach.

Now, in my last post I mentioned that I'd had an epiphany. The epiphany is: This problem isn't confined to philosophy.

You can see variants on the dilemma above playing out in other places. "It's not my job to educate you", for example, is primarily a request for people to engage in self-education. But it also has the side-effect of removing a useful tool from the toolkit, specifically the ability to identify a particular individual's opinion on some topic.

More generally, any assertion that investigative tools should be limited on the basis of first-order concerns is almost certainly going to cause problems if those same tools are needed to validate the underlying concern. Having restated the problem like that, it starts to look an awful lot like a form of epistemic closure: The tools needed to validate a belief are forbidden as a consequence of that same belief, thus the belief itself becomes immune to correction.

Yeah, Justin definitely needs to address that: How do we prevent t philosopher's position from leading to epistemic closure?

Interpreting And Taking Action On Requests To "Feel X"

Feelings are endogenous; that seems to be the upshot of the many and varied exhortations that we stop telling people how they should feel. No one can make you feel a particular way; at best they can create an environment conducive to a certain feeling. So if some group of people requests to feel a certain way, how is that request to be interpreted and acted upon?

Take, as a convenient and (presumably) non-contentious example, recent requests by La Salle University for improved physical safety. Per the article, students are feeling a "sense of fear", and have asked administrators to implement "better security". How should the administration respond?

One way to proceed is to take steps to ensure that students actually are safe. You identify relevant measures of safety, whatever they may be, and then do whatever is needed to improve them (if necessary). At some point, presuming you execute well, students will be safe in the relevant sense. Now, setting aside the specific facts regarding La Salle (since we're not here to argue that case), suppose that the students come back and say that they still don't feel safe and that the administration needs to do more?

Let's stop and note that there are a couple of assumptions lurking in the background of the administration's initial response:

  • The administrators and students share an understanding of what it means to "be safe".
  • There's a correlation between "feeling safe" and "being safe".

What's interesting here is that there's both a normative/semantic component (shared definition) and an empiric component (correlation between feeling and being). Disagreements can arise when either assumption fails.

Were I in the administration's shoes I would tackle the empiric assumption in the hopes that it's more tractable. A conversation of the form "Here's why we think you're safe."/"Here's why we still feel unsafe." might break in a few ways:

  • Students are persuaded they're safe.
  • Administrators are persuaded that more genuinely needs to be done.
  • Discussion reveals a shared understanding of the notion of "safety", but students and administrators cannot reach a consensus on the empirical question.
  • Discussion reveals a lack of shared understanding of the notion of "safety".

It's easier to deal with empirical disagreements than normative disagreements. I'm possibly naive, but it seems like if you have a defensible case then making an executive decision is justified (and probably a foregone conclusion). Students gonna student and all that jazz; your life won't be easy, but that's what administrators are paid to do.

Lack of shared understanding seems like it could be a minefield, especially if the topic is more contentious than simply physical safety. I know I wouldn't want to be responsible for asking people to elaborate their beliefs in the era of "It's not my job to educate you". If asking questions is precluded then the alternative seems to pretty much be deference, at which point you're going to have a bunch of people razzing you for caving in.

Nothing discussed above is unique to educational settings; the same sort of dynamic is in play whenever there's a request that one group ensure that another group feels a certain way. And let's stop right there, I think I just had a minor epiphany. Rather than bury the lede I'll take that up in my next post.

But Wait! I Thought Everyone Rejected the Repugnant Conclusion?

Just a reminder that total happiness utilitarians are not a bogeyman; they actually exist in the wild. To wit, Torbjörn Tännsjö says:

The crucial thing is not how many people are living right now, but the sum total of happiness. Perhaps we should be fewer now to be able to go on for millions of years. Some people have a theory that we’re perhaps too many right now, and I don’t object to that. The idea is that we should be as many as possible at each point in time and go on for as long as possible. The rationale behind this is the idea that we should maximize the sum total of happiness.

That is all, you may now go about your business.

Bare Metal Management With Razor (3/N)

We got our PXE systems up and running on the Razor Microkernel. Next step is to image them!

Imaging with Razor is a two-step process:

  1. Define some tags to classify systems.
  2. Define some policies to image systems on the basis of their tags.

Step 1: Tags

Razor "tags" are essentially a rule-based system for classifying machines. You set up rules ahead of time, and then Razor automatically tags systems as they are discovered. For example, this rule says that any system with <2G RAM should get the 'small' tag:

[root@razor log]# razor create-tag --name small --rule '["<", ["num", ["fact", "memorysize_mb"]], 2048]'
From http://localhost:8150/api/collections/tags/small:

      name: small
      rule: ["<", ["num", ["fact", "memorysize_mb"]], 2048]
     nodes: 0
  policies: 0
   command: http://localhost:8150/api/collections/commands/1
The next time the 1G VM checks in it will have the small tag applied:
[root@razor log]# razor nodes
From http://localhost:8150/api/collections/nodes:

+-------+-------------------+--------+--------+----------------+
| name  | dhcp_mac          | tags   | policy | metadata count |
+-------+-------------------+--------+--------+----------------+
| node1 | 08:00:27:0c:fd:f4 | small  | ---    | 0              |
+-------+-------------------+--------+--------+----------------+
| node2 | 08:00:27:43:84:1d | (none) | ---    | 0              |
+-------+-------------------+--------+--------+----------------+
...
Similarly, we can define a rule that tags all systems with >2G as 'large':
[root@razor log]# razor create-tag --name large --rule '[">", ["num", ["fact", "memorysize_mb"]], 2048]'
From http://localhost:8150/api/collections/tags/large:

      name: large
      rule: [">", ["num", ["fact", "memorysize_mb"]], 2048]
     nodes: 0
  policies: 0
   command: http://localhost:8150/api/collections/commands/2
and then, the next time the 4G node checks in...
[root@razor log]# razor nodes
From http://localhost:8150/api/collections/nodes:

+-------+-------------------+-------+--------+----------------+
| name  | dhcp_mac          | tags  | policy | metadata count |
+-------+-------------------+-------+--------+----------------+
| node1 | 08:00:27:0c:fd:f4 | small | ---    | 0              |
+-------+-------------------+-------+--------+----------------+
| node2 | 08:00:27:43:84:1d | large | ---    | 0              |
+-------+-------------------+-------+--------+----------------+
...
Pretty neat, huh?

One minor shortcoming of Razor is that you can't arbitrarily tag a set of servers; tag application is entirely rule-based. This adds a little bit of complication to the common use case of "I got this bunch of servers I just brought online and I know exactly what I want to use them for". You can fake that functionality using the 'in' operator and a list of MAC addresses:

razor create-tag --name my-set-of-servers \
  --rule '["in", ["fact", "macaddress"], "de:ea:db:ee:f0:00", "de:ea:db:ee:f0:01"]'
This seems like it's a popular use case, as later editions of Razor introduced the has_macaddress and has_macaddress_like operators to support this type of rule.

Step 2: Policies

The second half of setting up Razor for system imaging is to define policies, which tell Razor what to install and how it should be installed. Policies are triggered via tag matching, which automatically applies the appropriate policy to machines with a particular set of tags. For purposes of this demonstration let's assume that we want to install Ubuntu on small nodes and CentOS on large nodes.

Before we can create a policy we need to identify a few things:

  • What collection of bits will be used to image the systems?
  • What are the mechanics for actually laying the bits down?
  • How will the handoff from Razor to a configuration management system be handled?
Bullets one and two are handled via the creation of a repository. The basic form of the command is
razor create-repo --name <name> --task <task> [ --iso-url <url> | --url <url> ]

One choice which needs to be made at this point is whether the Razor server is going to serve up the bits directly or simply point systems to another location. If you select --iso-url the Razor server will download the ISO and unpack it; make sure you have ample free disk space. --url will cause the Razor server to point to the specified address rather than serving up the content directly.
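
As an illustration, pointing at a mirror's install tree instead of an ISO would look something like this (hypothetical; I'm assuming the standard CentOS mirror layout and didn't run this variant):

razor create-repo --name centos-7-mirror --task centos/7 --url http://centos.s.uw.edu/centos/7.6.1810/os/x86_64/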

The other thing you need to do is specify a task, which provides Razor with the instructions on how to bootstrap the automated installation process. Task creation is somewhat involved and not for the faint-of-heart but, thankfully, Razor comes with a bunch of pre-defined tasks for common operating systems:

[root@razor ~]# razor tasks
From http://localhost:8150/api/collections/tasks:

+-----------------+----------------------------------------------------------------+---------+--------------------------------------+
| name            | description                                                    | base    | boot_seq                             |
+-----------------+----------------------------------------------------------------+---------+--------------------------------------+
| centos          | CentOS Generic Installer                                       | redhat  | 1: boot_install, default: boot_local |
+-----------------+----------------------------------------------------------------+---------+--------------------------------------+
...
+-----------------+----------------------------------------------------------------+---------+--------------------------------------+
| windows/8pro    | Microsoft Windows 8 Professional                               | windows | 1: boot_wim, default: boot_local     |
+-----------------+----------------------------------------------------------------+---------+--------------------------------------+
...

So let's set up repos for CentOS 7 and Ubuntu Xenial, since there are pre-defined tasks for both of those:

[root@razor ~]# razor create-repo --name centos-7 --task centos/7 --iso-url http://centos.s.uw.edu/centos/7.6.1810/isos/x86_64/CentOS-7-x86_64-DVD-1810.iso
From http://localhost:8150/api/collections/repos/centos-7:

     name: centos-7
  iso_url: http://centos.s.uw.edu/centos/7.6.1810/isos/x86_64/CentOS-7-x86_64-DVD-1810.iso
      url: ---
     task: centos/7
  command: http://localhost:8150/api/collections/commands/6

[root@razor ~]# razor create-repo --name ubuntu-xenial --task ubuntu/xenial --iso-url http://releases.ubuntu.com/16.04/ubuntu-16.04.6-server-amd64.iso
From http://localhost:8150/api/collections/repos/ubuntu-xenial:

     name: ubuntu-xenial
  iso_url: http://releases.ubuntu.com/16.04/ubuntu-16.04.6-server-amd64.iso
      url: ---
     task: ubuntu/xenial
  command: http://localhost:8150/api/collections/commands/11
We now have two repos, centos-7 and ubuntu-xenial, that can be referenced in policies. The razor server will download and unpack the associated ISOs in the background.

The other item we have to consider for a policy is the Razor → configuration management system handoff. Razor handles this by means of brokers, and supports several popular configuration management systems (namely Puppet and Chef) out of the box (see razor create-broker --help for a complete listing). Additionally, if you want to integrate with a different system like Salt or Ansible, Razor allows you to write your own brokers.

I'm going to keep things simple and just create a no-op broker:

[root@razor ~]# razor create-broker --name=noop --broker-type=noop
From http://localhost:8150/api/collections/brokers/noop:

           name: noop
    broker_type: noop
  configuration: {}
       policies: 0
        command: http://localhost:8150/api/collections/commands/3
This type of broker doesn't try to do any sort of hand off; it's basically just a placeholder.

Alright, we've got repositories and a broker, let's create the policies:

razor create-policy --name small-nodes --repo ubuntu-xenial --broker noop --tag small --hostname 'ubuntu${id}.localdomain' --root-password not_secure
From http://localhost:8150/api/collections/policies/small-nodes:

       name: small-nodes
       repo: ubuntu-xenial
       task: ubuntu/xenial
     broker: noop
    enabled: true
  max_count:
       tags: small
      nodes: 0
    command: http://localhost:8150/api/collections/commands/12
The policy 'small-nodes' will install Ubuntu Xenial on any node with the 'small' tag. The host will be named according to its ID and have the specified root password. Doing it again for CentOS:
[root@razor etc]# razor create-policy --name large-nodes --repo centos-7 --broker noop --tag large --hostname 'centos${id}.localdomain' --root-password not_secure
From http://localhost:8150/api/collections/policies/large-nodes:

       name: large-nodes
       repo: centos-7
       task: centos/7
     broker: noop
    enabled: true
  max_count:
       tags: large
      nodes: 0
    command: http://localhost:8150/api/collections/commands/13
Same deal for the most part: Nodes with the tag 'large' will get CentOS 7 and the associated hostname.
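
One thing worth knowing here: policies live in an ordered list, and a node is bound to the first enabled policy whose tags all match it. razor policies will show both entries and the order in which they'll be evaluated:

razor policies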

No additional steps are needed to kick off imaging. The next time either host checks in it will have the policy applied and will start the appropriate imaging process. For example:

[root@razor ~]# razor nodes
From http://localhost:8150/api/collections/nodes:

+-------+-------------------+-------+-------------+----------------+
| name  | dhcp_mac          | tags  | policy      | metadata count |
+-------+-------------------+-------+-------------+----------------+
| node1 | 08:00:27:0c:fd:f4 | small | small-nodes | 0              |
+-------+-------------------+-------+-------------+----------------+
| node2 | 08:00:27:43:84:1d | large | ---         | 0              |
+-------+-------------------+-------+-------------+----------------+
...
node1 has completed its scheduled check-in and has had the small-nodes policy applied. If you're watching the system's console you should see it reboot and go into the Ubuntu installation process.

When initially working through this process I got

ipxe no configuration methods succeeded
FATAL:  INT18:  BOOT FAILURE
on the console. Per the suggestion at http://ipxe.org/err/040ee1, a hard reboot temporarily solved the problem. A permanent fix, at least in the case of VirtualBox, is to disable the "Enable I/O APIC" feature for the VM.
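
In VBoxManage terms that's a one-liner (the VM must be powered off first, and 'node1' is just whatever you named the VM):

VBoxManage modifyvm node1 --ioapic off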

No further intervention should be required at this point; both VMs should come up with the appropriate operating systems and host names. Here's what I ended up with:

[root@razor etc]# razor nodes
From http://localhost:8150/api/collections/nodes:

+-------+-------------------+-------+-------------+----------------+
| name  | dhcp_mac          | tags  | policy      | metadata count |
+-------+-------------------+-------+-------------+----------------+
| node1 | 08:00:27:0c:fd:f4 | small | small-nodes | 1              |
+-------+-------------------+-------+-------------+----------------+
| node2 | 08:00:27:43:84:1d | large | large-nodes | 1              |
+-------+-------------------+-------+-------------+----------------+
Note that, in addition to listing a policy, the table also shows that both VMs have some metadata defined now. Let's see what it is:
[root@razor etc]# razor nodes node1
From http://localhost:8150/api/collections/nodes/node1:

          name: node1
      dhcp_mac: 08:00:27:0c:fd:f4
         state:
                     installed: small-nodes
                  installed_at: 2019-06-07T13:39:30-07:00
                         stage: boot_local
        policy: small-nodes
  last_checkin: 2019-06-07T13:13:21-07:00
      metadata:
                  ip: 192.168.15.74
          tags: small
...
In this case the metadata lists the IP assigned to the host.

And that's what imaging with Razor looks like, modulo some configuration management stuff that I decided to elide. That's it for the present; I expect that I'll write up one more post with some concluding thoughts in the near future.

Bare Metal Management With Razor (2/N)

Having set up our Razor environment, it's now time to put it through its paces. The first order of business is to get a VM or two up and registered with the system.

Start by creating two VMs, making the following configuration tweaks (there's a VBoxManage sketch after the list if you'd rather script it):

  1. Configure them to use the natnet1 NAT network.
  2. Enable network boot, and put it first in boot order.
  3. Give one VM 1G of RAM and another 4G of RAM.
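
If you'd rather script the VM creation, a rough VBoxManage equivalent looks like this (the VM name and ostype are my choices, nothing Razor-specific):

# create and register the small VM; repeat with --memory 4096 for the large one
VBoxManage createvm --name node1 --ostype Ubuntu_64 --register
# NAT network, network boot first, 1G of RAM
VBoxManage modifyvm node1 --memory 1024 --nic1 natnetwork --nat-network1 natnet1 --boot1 net --boot2 disk
# you'll also want to create and attach a disk before the eventual OS install
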
Power on the 1G VM and watch its console. It should PXE boot off of the Razor/Dnsmasq infrastructure and eventually come up at a login prompt; default creds are root/thincrust.

We haven't installed an OS yet, so what is the VM running? It's running the Razor Microkernel, "a small, in-memory Linux kernel that is used by the Razor Server for dynamic, real-time discovery and inventory of the nodes that the Razor Server is managing". It's entirely ephemeral, no disk needed. And, better yet, the docs on how to modify the kernel for your own needs are pretty good. So if you want to, say, include tools for updating system firmware, or running tests/benchmarks, or anything of that nature, it's not terribly hard to do.

Alright, on with the show! On the Razor server, do:

gem install razor-client
and then run razor nodes to see what's in inventory:
[root@razor log]# razor nodes
From http://localhost:8150/api/collections/nodes:

+-------+-------------------+--------+--------+----------------+
| name  | dhcp_mac          | tags   | policy | metadata count |
+-------+-------------------+--------+--------+----------------+
| node1 | 08:00:27:0c:fd:f4 | (none) | ---    | 0              |
+-------+-------------------+--------+--------+----------------+
...
There you have it... there is no step three. Alright Razor, tell me about node1:
[root@razor log]# razor nodes node1
From http://localhost:8150/api/collections/nodes/node1:

          name: node1
      dhcp_mac: 08:00:27:0c:fd:f4
         state:
                  installed: false
        policy: ---
  last_checkin: 2019-06-05T10:09:34-07:00
      metadata: ---
          tags: (none)
...
So far, so good. What sort of "facts" does the server know about the node?
[root@razor log]# razor nodes node1 facts
From http://localhost:8150/api/collections/nodes/node1:

          network_enp0s3: 192.168.15.0
              network_lo: 127.0.0.0
           system_uptime:
                            seconds: 731
                              hours: 0
                               days: 0
                             uptime: 0:12 hours
...
The server has recorded the system configuration and some other relevant items, like IP address and uptime. Now power up the 4G VM; it should do the same thing.

At this point we've achieved our first goal, which was to find a way to quickly boot up a bare metal system and do some poking around prior to making any decisions about OS installation. And now, a little digression.

I first got interested in bare metal management a bazillion years ago when I was working for a system integrator. We had to be able to build, inventory, and test racks of servers as efficiently as possible. Having that sort of capability is useful not only for system integrators, but for anyone who has to deal with hardware at scale. As a by-product of its operation, Razor persists a bunch of readily-accessible information about hardware configuration:

[root@razor ~]# su - postgres
Last login: Fri Jun  7 10:17:54 PDT 2019 on pts/0
-bash-4.2$ psql -c 'select name, hw_info, facts from nodes' razor_prd postgres
 name  |  hw_info                                                                                         | facts
-------+--------------------------------------------------------------------------------------------------+-------
 node2 | {fact_boot_type=pcbios,mac=08-00-27-43-84-1d,serial=0,uuid=55c8c240-9c29-43cc-ab69-c55720067fa4} | {"network_enp0s3":"192.168.15.0", ...
 node1 | {fact_boot_type=pcbios,mac=08-00-27-0c-fd-f4,serial=0,uuid=f3b91803-383d-43a7-bb76-97f995cd4118} | {"network_enp0s3":"192.168.15.0", ...
(2 rows)
facts contains a JSON blob with a bunch of useful information like MAC addresses, serial numbers, disk devices, etc., suitable for post-processing/transmission to a system of record.
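
If poking the database directly feels rude, the same data is available from the REST API that the client has been using all along:

curl -s http://localhost:8150/api/collections/nodes/node1 | python -m json.tool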

And we'll leave it at that for now. Next time we'll pick up with the second objective, doing some automated OS installs.

Tuesday, June 11, 2019

Bare Metal Management With Razor (1/N)

Last episode, I spent a little time messing around with Foreman, and eventually came to the conclusion that it's not quite the tool that I was looking for. Foreman wants a lot of fairly involved configuration up front, and (based on my limited experimentation) wants you to have a good idea what you're going to do with hardware ahead of time. Many other candidate systems (see my list) seem to operate under a similar paradigm. What I really want is something that will let me painlessly boot up machines and do basic hardware work (inventory/diagnostics/configuration) before making any decisions about if/how to image them.

One tool which stands out from the crowd in this regard is Razor. It provides a microkernel and some interesting PXE capabilities which let you get things up and running while deferring decisions about imaging to a later date. So it seems like a good candidate to experiment with further.

Start by building the same base VM we used for Foreman, with the exception that it only needs 1G of RAM.

Razor makes use of Postgres for data persistence, so we'll need to get that up and running as well. Here are some instructions for CentOS 7:

yum install -y postgresql-server postgresql-contrib
postgresql-setup initdb
systemctl start postgresql
And then the Razor-specific setup:
su - postgres
createuser -P razor
createdb -O razor razor_prd
The snippet above creates a user named razor and a DB named razor_prd owned by this user. This concludes the basic configuration of the Postgres DB; schema creation will follow in a bit.
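
If you want to double-check your work before moving on, listing the databases will show razor_prd with razor as its owner:

su - postgres -c 'psql -l'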

Moving on, we need to install the Razor server itself. Again, here's a distillation of the official instructions:

yum install -y http://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
yum install -y razor-server
So far, so good. Next we need to set up the DB schema, using the tools provided by the Razor package:
su - razor -c 'razor-admin -e production migrate-database'
The first time I did this I got
Sequel::DatabaseConnectionError: Java::OrgPostgresqlUtil::PSQLException: FATAL: Ident authentication failed for user "razor"
which indicates that something is wrong with the auth configuration for the Postgres DB. After a little Googling I found this post, which provided a fix. If you get the above error, open pg_hba.conf (on CentOS 7 it typically lives at /var/lib/pgsql/data/pg_hba.conf) and change the line which reads
host    all             all             127.0.0.1/32           ident
to
host    all             all             127.0.0.1/32           trust
The observant reader will note that, while createuser -P prompted us to set a password for the razor DB user, we never told Razor what it was. By default Razor expects to just be able to access the DB without a password, so the above change accommodates this requirement by making the DB trust any connection originating from localhost.
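
One detail the fix above glosses over: Postgres only re-reads pg_hba.conf on startup or reload, so make the change take effect with:

systemctl reload postgresql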

Alrighty, we should be all set. Fire up the server:

service razor-server start
Disable the firewall:
systemctl disable firewalld
iptables -F
And, on your host system, add a port forwarding rule to reach the Razor web interface:
VBoxManage natnetwork modify --netname natnet1 --port-forward-4 'razor:tcp:[127.0.0.1]:8150:[192.168.15.254]:8150'

Now, if you navigate to http://127.0.0.1:8150/api, you should get back a bunch of JSON showing the available server commands. This tells you that the Razor server is up and running and talking to the Postgres DB. This concludes installation of the Razor server proper, but there's still some work to be done to get the PXE infrastructure deployed.
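
The same check works from the command line on the razor VM itself (python -m json.tool just pretty-prints the response):

curl -s http://localhost:8150/api | python -m json.tool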

First, a handful of bootstrappy things need to be put in various locations; don't think too hard about this part unless you really, really want to know the gory details. Grab the latest microkernel and put it in the appropriate location:

yum install -y wget
wget http://pup.pt/razor-microkernel-latest
tar -C /opt/puppetlabs/server/data/razor-server/repo -xf razor-microkernel-latest
Ditto the PXE script for UNDI systems:
wget -O /var/lib/tftpboot/undionly.kpxe http://boot.ipxe.org/undionly.kpxe
Add a line to /etc/hosts which will allow Razor to generate a bootstrap script:
192.168.15.254 razor.localdomain razor
and then call the Razor API to generate it:
wget -O /var/lib/tftpboot/bootstrap.ipxe http://razor.localdomain:8150/api/microkernel/bootstrap
This concludes the mindless copying of the aforementioned bootstrappy things... back to the interesting bits.

Now here's a bit of a complication that we haven't had to deal with before. 'razor.localdomain' gets embedded into the bootstrap script, which means it needs to be resolvable by client systems. Usually, when experimenting, you can hack around this by adding appropriate entries to /etc/hosts, but since there's no equivalent to /etc/hosts in the PXE environment that won't work. Instead, razor.localdomain will have to be genuinely resolvable via DNS, which means we have to stand up some sort of DNS server.

I don't want to set up BIND, or any of the other enterprise-grade servers, for something as simple as providing DNS service for a single subnet. The PXE/DHCP/TFTP setup docs for Razor provide info on configuring Dnsmasq which, incidentally, can also be used for DNS:

Dnsmasq is a lightweight, easy to configure DNS forwarder, designed to provide DNS (and optionally DHCP and TFTP) services to a small-scale network. It can serve the names of local machines which are not in the global DNS.

Dnsmasq is an example of super awesome design. It has a bunch of really smart defaults, like reading records from /etc/hosts and setting up server forwarding on the basis of /etc/resolv.conf. It basically just Does The Right Thing™. So let's get Dnsmasq installed and configured:

yum install -y dnsmasq
Create a file /etc/dnsmasq.conf and paste in the configuration from the Razor docs:
dhcp-match=IPXEBOOT,175
dhcp-boot=net:IPXEBOOT,bootstrap.ipxe
dhcp-boot=undionly.kpxe
# TFTP setup
enable-tftp
tftp-root=/var/lib/tftpboot
We also need to specify the network configuration for DHCP:
dhcp-range=enp0s3,192.168.15.2,192.168.15.253,4h
dhcp-option=3,192.168.15.1
The first line says that requests received on interface enp0s3 will get IPs in the range 192.168.15.2 - 192.168.15.253 with a lease time of 4 hours. The second line sets the default gateway (DHCP option 3) to 192.168.15.1.
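
Before starting the service you can have Dnsmasq vet its own configuration; dnsmasq --test parses the config files and reports any syntax errors:

dnsmasq --test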

That concludes configuration of Dnsmasq. Once that's done, start it up:

service dnsmasq start

Ok, how are we looking?

[root@razor ~]# dig @127.0.0.1 razor.localdomain | grep -v '^;'

razor.localdomain.	0	IN	A	192.168.15.254
Bueno!

Alright, to recap what we did, since this was more involved than usual:

  1. Set up Postgres, and create a DB and user for use by Razor.
  2. Install Razor, and then use the provided utilities to set up the DB schema.
  3. Put the materials in place to support PXE boot.
  4. Install and configure Dnsmasq, which will provide PXE/DHCP, TFTP, and DNS services for our tiny little subnet.

Next time we'll use this collection of infrastructure to PXE boot a couple of VMs.

Monday, April 29, 2019

Using Foreman For Bare Metal Provisioning (2/N)

We've got a testbed for Foreman in place, so let's see what it takes to get it up and running. I'll be following the CentOS 7 instructions from the Quickstart Guide.

First, some preliminaries:

  • Make sure that you have enough RAM allocated to the VM. I tried to do this with a 1G VM (the default for 64-bit RedHat under VirtualBox) and got OOM errors.
  • The Foreman installer wants the FQDN (as returned by facter fqdn) to match the output of hostname -f. An easy way to do that for the purpose of experimentation is to edit /etc/hosts and replace the existing loopback entry with 127.0.0.1 foreman.localdomain foreman.
  • Ensure that the host firewall is off: systemctl disable firewalld; iptables -F.

Installation is very easy:

# yum -y install https://yum.puppetlabs.com/puppet5/puppet5-release-el-7.noarch.rpm
...
# yum -y install http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
...
# yum -y install https://yum.theforeman.org/releases/1.20/el7/x86_64/foreman-release.rpm
...
# yum -y install foreman-installer
...
# foreman-installer
...
  Success!
  * Foreman is running at https://foreman.localdomain
      Initial credentials are admin / 8pqUnEHJ2znCcUVC
  * Foreman Proxy is running at https://foreman.localdomain:8443
  * Puppetmaster is running at port 8140
  The full log is at /var/log/foreman-installer/foreman.log
The Foreman team should get credit for doing a good job automating the process. One thing I noticed immediately, and which explains the care taken with automating installation, is that Foreman is fairly complex. The following services are running on the VM post-installation:
  • Puppet server
  • Postgres
  • Apache
  • Passenger
Non-trivial, to say the least.

Having reviewed the Foreman manual, especially section 4.4 on Provisioning, it also seems like just getting a client to PXE-boot is very involved (see "4.5.2 Success Story" for required CLI commands), and the subsequent client handling is Puppet-centric.

Honestly, at this point I just want to have a client come up and register itself with a server, maybe get a description of what hardware is available and that sort of thing. I'd really prefer to defer decisions about operating systems and CM systems until after that. Foreman seems like it's too involved for the moment; I'm not rendering a permanent judgement on it yet, but I do want to set it down and see what else is out there.
