There’s a heartbreaking vertical/horizontal dilemma when we embark on ambitious software projects. Should we focus on a small number of cherry-picked use cases and build the best version of those we know how to, or invest effort in platform facilities that will make future, as yet unforeseen, use cases radically easier to deliver?
My nicknames for these two patterns are cooking the use case and raising the floor. I’ve always had a strong bias for raising the floor, but I’ve seen it fail enough times that my self-doubt is never far from the surface.
As with any of these false dichotomies (“all models are wrong; some are useful”), sticking with either extreme is a recipe for disaster:
If we exclusively build platform, no matter how good a job we do, we can’t be confident that we fully understand, and have prioritized correctly, the use cases and requirements we’re intending to support. I always try to open up a huge use case funnel to help us test-fit particular requirements against our general-purpose plans, but in my experience, it’s very difficult to get people to participate in that process for long enough that I feel confident that our abstractions will actually support a lot of value generation. If we spend all our time preparing for possible futures, a lot of valuable stuff can fly by when we’re not ready for it.
If, instead, we go fully vertical, and cook all the use cases, we may deliver compelling demos and prototypes and functionality in a reasonable time, but we run several very real risks:
1. The magic black box antipattern, in which we make something that perfectly solves one concrete problem. If we’re lucky, and we truly, madly, deeply understand our target personas, use cases and industries, this can really work! Unfortunately, customers are not all the same, and you’re likely to hear “Wow, that’s really cool! But can it…” In general, the likelihood that what we’ve built is merely very similar to a customer’s actual need, but not an exact match, is very high.
2. The tech debt toxic waste plant, in which the cooked use cases don’t really generalize, and end up as a muddy ball of one-off solutions that need constant care and feeding and patching, and don’t allow us to apply leverage in solving multiple problems at once.
What is to be done? To be honest, this is the biggest dilemma of my career thus far, and I don’t have any easy answers. We did a pretty good job at Layer 7, where the core dev team was focused on building platform, and a “tactical team” of roughly equal size was aimed at solving very concrete customer problems, using the composable platform facilities wherever they applied, but being absolutely willing to build one-offs when there wasn’t a clean match between the vision and the present. It worked pretty well!
One of the dilemmas I’ve managed to resolve to my own satisfaction over the past couple of years: How do we protect the stability of our platform without unduly burdening experimental, agile, vertical, one-off hacks? The only useful general answer I’ve got is modularity. Unfortunately, building in modularity early in a project can be really expensive and time-consuming, but it’s probably quite a bit more expensive to retrofit it later.
(This article is the first in a series. The next one is over here.)
I call it the Software Industry Scam: when you set out to build a modular, general-purpose platform that can be leveraged into a large number of industries and use cases, it might take a while, but if you do a good job, you start seeing amazing ways to tackle unforeseen use cases with no architectural friction, and little to no net new code.
When you pull it off, you can see tremendous revenue leverage—a multidimensional product of the time-honored software industry benefit of near-zero marginal cost of goods, times the delicious benefit of low marginal effort to conquer new use cases. Sometimes the right platform really does make what once required a team of expert programmers working for 6-18 months into a matter of configuration files or, better yet, drag-and-drop, Lego assembly of modular, compatible features into custom products that a sufficiently motivated end user can glue together herself. I’ve seen it work!
OK, let’s say you’ve timed your startup well, and in the 3-5 years it inevitably takes to build a solid platform, it looks like the TAM has really started to mushroom just as your sales and marketing are unleashed. Now, how do you convince people who don’t know, or don’t believe, that your general-purpose platform components can be assembled into a solution that will solve their problem in a shockingly short time, without hiring a team of developers or expert consultants?
We had exactly this problem at Layer 7. We built a web services gateway platform. It was a unicorn: powerful, fast, really easy to use, and modular, so we could add new features incredibly quickly. But we were selling it as a web services gateway. Not as an ERP to payroll connector. Not as a REST to SOAP to JMS to FTP transport adapter. Not as a dynamic service routing orchestration bus. Not as an API versioning layer. Not as any Foo to Bar connector, provided Foo and Bar had standards-based messaging formats. It could do all those things, but we focused our marketing on the general-purpose, abstract, architectural benefits of the solution.
The financial crisis hit, I got laid off along with half the dev team, things were touch-and-go for about a year. Then the winds of industry fashion changed, APIs were suddenly everything, and hey it turns out a web services gateway platform can be wrapped in an API management product with a little bit of effort. Same underlying technology, identical use cases at runtime, different configuration experience, brand new marketing message.
Suddenly the revenue growth starts looking like a hockey stick, the foosball table gets paved with gold, and CA gets a whiff of sustained revenue. An overnight success after ten years of effort! Nearly all the founders are gone, and investors are getting bored… We have a deal! It’s rumored that CA had no idea about the power and elegance and ease-of-use of the underlying platform; they just saw a product with revenue and wanted it.
I came up with the name this year. It signifies that the control plane (logical, Greek) and data plane (structural, Roman) often share many similarities, but they’re different alphabets, and there isn’t always a bidirectional, information-preserving mapping between them. There always needs to be a one-way mapping (control -> data), though!
Control Plane and Data Plane runtime components must be loosely coupled:
Control plane components can, and usually should, store configuration in a CP store. It’s OK for configuration update functionality to be unavailable sometimes, because it’s more important that the configuration be consistent.
Data plane components can, and usually should, store data in an AP store. It’s OK for data to be eventually consistent, but it must always be available.
A conductor is a service that watches for control plane events, and implements them, in the particular region of the data plane it’s responsible for. This is where the Greco-Roman part comes in: configuration entities of type alpha in the domain model may correspond to A’s in the data plane: there are often clean mappings, but they’re definitely different alphabets.
For example, a Lambda Conductor might observe CUDs of Query objects (Greek: alphas), and create/update/delete corresponding functions on AWS Lambda (Roman: A’s) in order to enact the user’s desired domain model change.
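To make the alphabet mapping concrete, here is a small, hypothetical Scala sketch of such a Lambda Conductor. The event types, the LambdaClient wrapper, and the naming scheme are invented for illustration; this is not the AWS SDK or any real product’s API.

```scala
// Hypothetical sketch: Greek (Query CUD events) mapped to Roman (Lambda functions).
sealed trait QueryEvent { def queryId: String }
case class QueryCreated(queryId: String, sql: String) extends QueryEvent
case class QueryUpdated(queryId: String, sql: String) extends QueryEvent
case class QueryDeleted(queryId: String)              extends QueryEvent

// Stand-in for whatever client actually talks to AWS Lambda.
trait LambdaClient {
  def createFunction(name: String, code: Array[Byte]): Unit
  def updateFunction(name: String, code: Array[Byte]): Unit
  def deleteFunction(name: String): Unit
}

class LambdaConductor(lambda: LambdaClient, compile: String => Array[Byte]) {
  // Each alpha (Query) corresponds to exactly one A (Lambda function) in this region.
  private def functionName(queryId: String) = s"query-$queryId"

  def onEvent(event: QueryEvent): Unit = event match {
    case QueryCreated(id, sql) => lambda.createFunction(functionName(id), compile(sql))
    case QueryUpdated(id, sql) => lambda.updateFunction(functionName(id), compile(sql))
    case QueryDeleted(id)      => lambda.deleteFunction(functionName(id))
  }
}
```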
Conductors are specialized to a particular runtime column and have intimate knowledge of its capabilities, so they should have sovereignty over which domain model mutations are permissible. Thus, the control plane must wait for positive responses from all conductors before making a proposed domain model mutation permanent. Domain models really need to be versioned anyway, which makes this feasible to implement: the control plane can save speculative future versions of users’ work in progress without requiring quorum, and only when a user (ideally one with higher permissions than those required to edit!) decides to “make it so” is a two-phase commit required.
It’s basically impossible to coordinate a state transition across a large, complex distributed system, so let’s cheat. Once all conductors have accepted a proposed domain model update, and signaled that they’re ready to go live, the control plane should inject a synthetic message into the data plane to signal the new configuration epoch. Runtimes should only start acting on the new configuration once they observe this message.
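Here is a minimal Scala sketch of that flow, assuming hypothetical Conductor, DomainModel, and EpochMarker types (none of these names come from a real API): the control plane collects prepare votes from every conductor, and only injects the epoch marker into the data plane once all of them have committed.

```scala
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical types for illustration only.
case class DomainModel(version: Long /* , entities... */)
case class EpochMarker(version: Long)

trait Conductor {
  // Phase 1: can this conductor enact the proposed model in its runtime column?
  def prepare(proposed: DomainModel): Future[Boolean]
  // Phase 2: create/update/delete the corresponding data plane entities.
  def commit(proposed: DomainModel): Future[Unit]
}

class ControlPlane(conductors: Seq[Conductor],
                   publishToDataPlane: EpochMarker => Unit)
                  (implicit ec: ExecutionContext) {

  /** "Make it so": succeeds only if every conductor accepts the proposal. */
  def makeItSo(proposed: DomainModel): Future[Unit] =
    Future.sequence(conductors.map(_.prepare(proposed))).flatMap { votes =>
      if (votes.forall(identity))
        Future.sequence(conductors.map(_.commit(proposed))).map { _ =>
          // Runtimes switch to the new configuration only when they observe this message.
          publishToDataPlane(EpochMarker(proposed.version))
        }
      else
        Future.failed(new IllegalStateException("proposal rejected by a conductor"))
    }
}
```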
I’ve worked with a lot of product managers over the past 15 years (and I’m kinda sorta thinking about becoming one) and I’ve noticed that there are four major categories of skills/behaviours/talents. Every one of these is critical, but in my experience, no single person is great at all four. IMNSHO, every software product team that wants to be successful needs to have someone halfway competent playing every one of these roles at least part-time.
I think I’m pretty decent at the first three of these, and not great at the last yet. What should I be reading?
Now, we have the latest: a Scala Improvement Document (updated link) written by none other than Marc Stiegler, one of the major participants in the E effort. Coincidence?
Tangentially, Marc is also the author of Earthweb — an extremely interesting SF novel that posits a method of solving really big problems (planetary existential risks!) by harnessing the collective wisdom of huge numbers of people using microeconomic incentives.
I can honestly say if someone had shown me the Programming Scala book by Martin Odersky, Lex Spoon & Bill Venners back in 2003 I’d probably have never created Groovy.
(emphasis added)
After slightly more than six years, Layer 7 and I have parted company. So that means you can finally hire me, like you’ve been wanting to all these years!
The JLS and the JSRs are not trade secrets, and—to my knowledge, I’m not about to go looking—most of them are not patented either. If I felt like punishing myself, I have no doubt that I could spend a few years and a few million dollars re-implementing Java EE 5 and release it to the world without paying a cent to Sun or anyone else, and without worrying about a lawsuit.
Wait, what’s that? I wouldn’t be able to call it a “Java EE” implementation without paying for the conformance tests? Gee, that’s too bad. Oh well, I guess I’ll have to compete on the strength of my implementation and fabulously expensive marketing blitz.
How many people really care whether MyFaces can legally call itself an implementation of JavaServer™ Faces? If I used JSF, I would care that it implemented as much of the JSF API as I needed and was of high quality.
Similarly, I really don’t think the vast majority of developers care whether any particular JVM implementation is blessed by Sun, especially not when OpenJDK is available and IBM and Oracle seem happy enough to keep working on their JVMs.
What’s really important to me (and, I suspect, to most Java-ecosystem developers) is that, for the foreseeable future, there will continue to be a stable, high-performance, scalable, manageable, cross-platform runtime, compatible with the Java memory model and bytecode, for me to develop and deploy my products on through their expected 10+-year life cycle. That runtime really should be open source, and ideally there should be more than one compatible implementation—competition and choice are good.
I think most people would agree that we’re in pretty good shape here: all three major commercial JVM implementations are stable, performant, manageable, scalable and cross-platform, at least one of them is open source already, and the others are available free of charge (although not necessarily redistributable).
I don’t begrudge Sun their Java™ business model. They invest heavily in the Java ecosystem, have open-sourced one of the world’s most important pieces of software (in the form of OpenJDK) and should be free to dictate the terms under which their trademarks can be used. Those who care about certification can pay for it; those who don’t are already getting a whole lot of something for nothing.
Updated: Changed first sentence slightly to reflect that Stephen Colebourne is not speaking on behalf of the ASF.
David R. MacIver and Martin Odersky have both been working on Just A Library versions of break and continue, which Scala otherwise lacks.
Say it with me! “When you have the right language features, you don’t need any more language features!”
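For a sense of how little machinery this takes, here is a toy sketch of break as Just A Library: a private control-flow exception plus a method that catches it. This is not MacIver’s or Odersky’s actual code, though scala.util.control.Breaks in the standard library works on the same principle.

```scala
// Toy sketch: break implemented as an ordinary Scala library, no language feature needed.
object MyBreaks {
  private class BreakException extends RuntimeException

  // Marks the region that break() will exit.
  def breakable(body: => Unit): Unit =
    try body
    catch { case _: BreakException => () }

  // Unwinds to the nearest enclosing breakable block.
  def break(): Nothing = throw new BreakException
}

object BreakDemo extends App {
  import MyBreaks._
  breakable {
    for (i <- 1 to 10) {
      if (i > 3) break()   // leaves the breakable block, i.e. "breaks" the loop
      println(i)
    }
  }
}
```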
Scala’s flexible syntax, implicit conversions and powerful support for abstraction are extremely conducive to the implementation—as libraries—of useful software constructs and idioms that would normally need to be “baked in” to other languages.
Many of my cheeky pot shots at Project Coin illustrate this capability in a small way.
To illustrate further, I thought I’d start off by listing just a few of the well-liked features of other languages that have been implemented successfully as Just A Library In Scala:
| Other Language | Feature | Scala |
|---|---|---|
| Erlang | Actor concurrency | Scala Actors |
| (many) | Map Literals | `Map("hi" -> true, "bye" -> false)` |
| (many) | List Literals | `List("hi", "there")` or `"hi" :: "there" :: Nil` |
| (various) | Automatic Resource Management | `manage(stream) { stream.write… }` |
Before you write to complain that any of the above has annoying syntax, please remember that It’s Just A Library, and if it bothers you enough, you can write your own.
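As a worked example of the last row, here is a hedged sketch of automatic resource management as Just A Library. `manage` is simply the hypothetical name used in the table (Scala’s standard library later added scala.util.Using in the same spirit), and this version passes the resource to the block rather than closing over it.

```scala
object Arm {
  // Runs body with the resource, closing it even if body throws.
  def manage[R <: java.io.Closeable, A](resource: R)(body: R => A): A =
    try body(resource)
    finally resource.close()
}

object ArmDemo extends App {
  import java.io.FileWriter
  val written = Arm.manage(new FileWriter("out.txt")) { w =>
    w.write("hello")
    5
  }
  println(s"body returned $written; the writer is already closed")
}
```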