IT OpShare at Harvard Law School
Two much-quoted truths from William Gibson: “The future is here. It’s just not very evenly distributed.” And “The future is not Google-able.” So if we need to plan for the future, it helps to get out and talk with folks who are experiencing parts of it already.
That’s what great technology conferences do: they enable (or cause) mashups among diverse, possibly complementary, but previously non-overlapping worldviews.
This doesn’t always mean what marketers think it’s going to mean: i.e., collisions between classic, straw-man personas such as ‘legacy IT’ vs. ‘hair-on-fire DevOps.’ Nor does it mean learning experiences that all flow in one direction: e.g., from beards-and-beanies purveyors of the new-and-cool (small mammals) towards timid IT bean-counters (dinosaurs).
In fact, at IT OpShare 2018, held last Thursday at Harvard Law School in Cambridge, MA, there were no dinosaurs. Absolutely, there were lots of nicely-dressed IT leaders from all around the Ivy League, government, and the Boston tech corridor, many with ultralight laptops conspicuously free of stickers. But most of these folks turned out to be hip-deep in continuous delivery, container orchestration, and DevOps-for-production already. And what they were looking for, by and large, were ideas for how to pursue the agility agenda even more swiftly and radically, with greater observability, organization-wide transparency, and business/process-relevance.
Hardware Unification as a Human Factor
Sherif Hashem, Associate Director of Enterprise Infrastructure at Harvard Law School, and our host, laid it out succinctly in his opening presentation, which deftly related the evolution of phenomena like unified and cloud computing, infra-as-code, and agile software development methodology to accelerating business and improving human understanding of shared priorities, goals, and progress.
In Hashem’s IT cosmology, for example, unified computing -- fully-converged infrastructure that presumes the existence of a cloud layer above it -- helps vendors provide pre-validated designs that eliminate dependency issues (helpful). But this is secondary to enabling hardware account management that’s aware of, and responsive to, total business requirements (very helpful). Scrum -- and agile development methods in general -- is a way of getting more done faster by committing to sprints (helpful). But a big part of this is helping developers embrace changing requirements from the business, and helping them share back transparently about what they’re doing (very helpful, indeed).
Abstracting Away the Cloud … Or Not
Subsequent presenters amplified and elaborated these themes. Rafael Gonzalez, Senior Sales Engineer from open source container orchestration and service-discovery vendor Mesosphere, pointed out that “getting to cloud” was now a given. The higher bar was to become independent of cloud platform details by promoting cloud-like resource composability and by leveraging ever-higher orders of cloud-like automation at every level of the stack. Implement everything like a cloud, Gonzalez advised, from hardware, through infrastructure, through applications. Run all your infrastructure as code, ‘as a service,’ and keep it evergreen: run the latest versions everywhere to boost security and performance, and to take the most efficient advantage of support.
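Gonzalez’s “everything as code” advice reduces to a declare-and-reconcile loop: declare desired state, diff it against observed state, and let automation converge the two. A minimal, illustrative sketch of that loop in Python -- the resource names and versions here are hypothetical, and it isn’t tied to Mesosphere or any particular tooling:

```python
# Illustration of the "infrastructure as code" pattern: declare desired
# state, diff it against observed state, and emit convergence actions.
# All resource names and versions below are hypothetical examples.

desired = {
    "web":   {"image": "nginx", "version": "1.25", "replicas": 3},
    "queue": {"image": "rabbitmq", "version": "3.13", "replicas": 1},
}

observed = {
    "web":   {"image": "nginx", "version": "1.24", "replicas": 3},
    "cache": {"image": "redis", "version": "7.2", "replicas": 1},
}

def reconcile(desired, observed):
    """Return the actions needed to converge observed state on desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name))
        elif observed[name] != spec:
            actions.append(("update", name))  # e.g. keeping versions evergreen
    for name in observed:
        if name not in desired:
            actions.append(("delete", name))
    return actions

print(reconcile(desired, observed))
# -> [('update', 'web'), ('create', 'queue'), ('delete', 'cache')]
```

The “evergreen” advice falls out naturally: a version bump in the declared state shows up as an `update` action on the next reconcile pass, rather than as a manual upgrade project.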
As if to push back, Beau Christensen, Director of Platform Engineering at (Opsview partner and IT OpShare co-sponsor) VictorOps, poked a little fun at the current vogue for implementing enterprise applications as distributed, active-active, multi-master systems. His Practical Tip #1 read: “Are you sure? (Like, really sure?)” But Christensen (and his audience, who laughed knowingly in all the right places) were … well … really, really sure that this was how they wanted to build apps. So the rest of his tips were unflinchingly practical: hard-won from VictorOps’ own experience building a sophisticated as-a-service infrastructure for alert processing, collaborative problem-solving, and documentation.
Contrary to Gonzalez’s message about abstracting away differences between cloud platforms, Christensen sees practical benefit in picking (and sticking with) a cloud platform provider whose offering aligns well with your application’s architecture. A big part of this, he suggests, is that cloud-specific streaming, queueing, and database choices will influence how well you deal with the biggest problem in building multi-master application architectures: what he calls ‘The Monster’ -- the problem of representing and transferring state.
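Christensen’s Monster is easy to state concretely: two masters can both accept writes to the same data, and the system must detect when those writes conflict. A toy sketch of one common detection approach, version vectors -- offered as illustration, not as VictorOps’ actual mechanism:

```python
# Toy version-vector comparison for multi-master replication.
# A version vector maps each replica to the count of writes it has applied.
# Comparing two vectors tells us whether one write supersedes the other,
# or whether they were concurrent and must be merged by the application.

def compare(vv_a, vv_b):
    """Compare two version vectors (dicts of replica -> write counter)."""
    replicas = set(vv_a) | set(vv_b)
    a_ahead = any(vv_a.get(r, 0) > vv_b.get(r, 0) for r in replicas)
    b_ahead = any(vv_b.get(r, 0) > vv_a.get(r, 0) for r in replicas)
    if a_ahead and b_ahead:
        return "conflict"   # concurrent writes: the Monster shows itself
    if a_ahead:
        return "a_newer"
    if b_ahead:
        return "b_newer"
    return "equal"

# Replica A took two local writes; replica B saw only one of them
# plus a write of its own -- a genuine conflict:
print(compare({"a": 2, "b": 0}, {"a": 1, "b": 1}))  # -> conflict

# B has seen everything A wrote, so A's extra write simply wins:
print(compare({"a": 2, "b": 1}, {"a": 1, "b": 1}))  # -> a_newer
```

The point of Christensen’s advice follows directly: a cloud provider’s native streaming, queueing, and database services each make different choices about how (or whether) they surface this conflict case to you, so the platform choice shapes how you fight the Monster.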
What You See Is What’s Important
VictorOps’ own business purpose, of course, is to support their infrastructure for braiding together monitor alerts, logs, and other channels of information to enable insight, action, and institutional knowledge. The need for taking such a structured approach was made abundantly clear by Thomas Wollan, NOC Manager at the Norwegian MSP TeleComputing, in his presentation titled “Monitoring as a Value-Added Service.”
Wollan shared how his organization had long suffered from inefficient application and utilization of monitoring but had successfully reinvented its approach by unifying monitoring onto a single platform (Opsview Monitor), aligning internal expectations and sharing standards, adopting automation, and making decisions about how to structure views that ensure monitoring users are always seeing what’s important. While work is still progressing, Wollan says, they’ve now moved from satisfying basic animal needs to the “Love and Belonging” middle tier of a Maslow-style self-actualization-through-monitoring pyramid. Peak experience, no doubt, awaits.
Smart Components and Infra as Code
Chad Lavoie, Senior Systems Engineer at (OpShare 2018 co-sponsor) HAProxy Technologies, reminded non-developers that components of the modern cloud/container/app stack, though often characterized and used simplistically (e.g., “HAProxy is a load balancer”), are actually hugely powerful, flexible, and performant engines (“HAProxy is an Application Delivery Controller”) -- ideal for running under automation to impose rules, secure communications channels, and monitor communications-related aspects of app performance.
In a similar vein, Joshua Stenhouse, Advisory Sales Engineer at cloud data management provider Rubrik, offered a three-step process for implementing automation and consuming infrastructure as code. He recommends prioritizing (read: exclusively using) tools that implement the OpenAPI Specification (OAS, originally SmartBear’s Swagger), which makes REST interfaces self-describing, self-documenting, and discoverable. He also shared a compelling, short demo of how a PowerShell script could consume an infrastructure-as-code schema that included storage specifications, and quickly determine on behalf of a given user whether recent backups had succeeded or failed on assets for which that user was responsible.
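The appeal of OAS is that a client can read the specification itself to discover which operations a service offers, instead of hard-coding paths. A hedged sketch of that discovery step in Python -- the spec fragment, server URL, and endpoint names below are entirely hypothetical, not Rubrik’s actual API:

```python
import json

# Hypothetical OpenAPI (OAS) fragment. In practice a client would fetch
# the spec from the service itself; we inline a tiny example here.
spec = json.loads("""
{
  "openapi": "3.0.0",
  "servers": [{"url": "https://backup.example.com/api/v1"}],
  "paths": {
    "/backups": {"get": {"operationId": "listBackups"}},
    "/backups/{id}": {"get": {"operationId": "getBackup"}}
  }
}
""")

def discover(spec):
    """Walk an OAS document, mapping operationId -> (HTTP method, URL template)."""
    base = spec["servers"][0]["url"]
    ops = {}
    for path, methods in spec["paths"].items():
        for method, operation in methods.items():
            ops[operation["operationId"]] = (method.upper(), base + path)
    return ops

ops = discover(spec)
print(ops["listBackups"])
# -> ('GET', 'https://backup.example.com/api/v1/backups')
```

A script like Stenhouse’s backup-status checker can then call operations by name (`listBackups`, `getBackup`) and filter results per user, without ever hard-coding the service’s URL layout.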
Scripts > Clicks! No, Clicks > Scripts!
Stenhouse promotes infrastructure as code and automation as tools for moving beyond the human error, slow speed, and uncertainty of manual data center operations, referring to the latter as “clicking.” In the day’s closing presentation, Opsview’s Innovation Lead, Bill Bauman, reminded us that “clicks” can also be important, since they’re a user-friendly way of invoking simple automation, conveying intention to higher-order automation mechanisms (e.g., machine learning), and eventually, of collaborating with artificial intelligence, whose cognitive character, Bauman argues, will make it robust, reliable, and adaptable. Bauman’s “post-code” vision for DevOps (and for monitoring) predicts the emergence of artificial cognition to replace the ecosystems of plugins and APIs now used as connectors between management systems and infrastructure.
Embracing Chaos and Insecurity
The real goal of IT OpShare was to promote conversation among DevOps, IT, monitoring, and automation practitioners. To this end, presentations were constrained to 15-minute time slots, and the half-day program concluded with leisurely conversation (and beer) in the elegant Pub of Harvard Law School’s Caspersen Student Center. We had some fascinating chats with attendees, many of whom (as noted above) are already deeply involved in DevOps and are evolving their IT organizations in radical directions, inspired by the open source projects and cultures around organizations like Google and Netflix -- including their embrace of entropy, the need to engineer solutions that work despite constant subsystem failures, and paranoid application security design patterns. We’ll be bringing you interviews with, and profiles of, several of them in coming days.