Wednesday, October 19, 2011

Agile Software Development - debate?

There has been a significant upsurge of discussions around the "agile" software development approach (as ill-defined as that may be). A backlash of sorts… Not surprising - I have yet to meet a software development approach that does not go through the typical cycle of discovery, dismissal, early adoption, enthusiastic adoption, boredom, discussion, rejection, obsolescence. The proponents of these approaches will happily quote Gandhi at various stages in the cycle: "First they ignore you, then they laugh at you, then they fight you, and then you win".

But we are talking about development approaches, which present an interesting pair of challenges:
  • the perfect solution costs nothing, presents no risk, and requires no revision
  • the perfect solution solves the problems as stated, as understood by all stakeholders both when they were stated and later, and as discovered by new stakeholders after delivery
And this results in the following challenge to any approach: what is the true measure of success it is being judged against?

I have had my share of software development responsibilities, as a developer, as an architect, as a development manager and even as a product manager. I will state the obvious - it's understanding and managing what you are measuring success against that counts, not how you get there. It's so easy to get bogged down in discussing the fine details of the processes and deliverables of a software development approach, and not understand what its motivation is, what it attempts to solve, and how to measure its effectiveness. I have seen, more times than I care to count, the adoption of a formal approach presented as the solution to problems that were not understood, invariably with dismal results, leading to the adoption of new approaches with the same lack of success.

Software development is difficult - it is not reducible to pure "mechanical" processes because, at this stage of its maturity, the amount of creativity involved is such that the "human" aspects play a huge role in its implementation. It's impossible to consider software developers as essentially interchangeable for any task of consequence. It's impossible to ignore human psychology when assigning tasks, solving problems, triaging emergencies, etc... It's impossible to set aside the fact that requirements do evolve, that customers do discover new needs, that they want them satisfied quickly or they move on, etc.

Recognizing this, and incorporating it in the approach, is what has made the "agile" approaches ("agile" is an umbrella term for so many variations) valuable. There is nothing magical there, and nothing that was not done in many places before. I distill it down to two essential motivations:
  • it's easy to lose sight of the true objectives, and the objectives vary frequently, leading to disasters: let's make the objectives those that ultimately matter, keep them transparent, manageable in size and in time, and let's make sure they are reviewed frequently
  • it's easy to forget about human dynamics and the social aspects of software development, leading to disasters: let's have effective coordination, let's have common sense, participant-adapted processes and metrics, let's foster collaboration across developers and functions
Keep the focus on your transparent, measurable and evolving objectives. Take into account the social aspects of software development. All the processes, deliverables, and recipes in the "agile" approaches are there to support these two key motivations.

Why do I say all this? Simply because the debates on the details of which "agile" method is better than the others are noise. The key questions are: are these two motivations important to you? Do you recognize in them the purpose of removing problems you are facing? If so, any of the "agile" methods, adapted to your realities, will fit the bill.

Decision Management is connected to all this. Its key goal is to infuse applications with the ability to respond in an agile way to regulatory, market and eco-system changes. The key question is then whether the approach used to develop and manage the decisions is truly agile, and how you assess that.

Carole-Ann has started writing about it, and will soon be presenting at RulesFest 2011 on this very subject.

Interesting stuff indeed.

Sunday, February 21, 2010

The importance of business analyst interfaces

Carole-Ann wrote a blog post about what she considers to be the #1 pitfall in the implementation of proper Decision Management applications. Her observations are based on vast experience with many implementations of such applications across multiple vertical domains.

I fully agree with Carole-Ann’s position highlighting the importance of getting the user interface for business analysts well defined and implemented, with their workflows in mind, and with special attention paid to their concerns.

I have been in a position to review multiple implementations, built both with the products I have been responsible for and with their competitors, and, in a fairly significant number of cases, what I have reviewed amounted to little more than direct translations of implementation concepts, disconnected from the concepts and workflows of the business analysts. Some of these implementations are a consequence of tools that insist on presenting very low-level interfaces – typically, single "if-then-else" rules built by point and click – or a single representation – typically, a complex table-based interface, or a tree representation. But others leverage tools that can do much more, yet fall back to low-level implementations that end up causing significant frustration for the business users and, worse, a breakdown of their workflow and of their ability to actually manage the decisions, contrary to the promise of the Decision Management approach.

This is serious, and I will go as far as Carole-Ann on this – the ultimate success of the discipline hinges on getting this right.

What are the two key issues I have seen in these failed implementations?

Lack of consideration for the concepts and representations used by the business analysts themselves

Business analysts do think in terms of policies, constraints, pricing structures, etc… They do not think in terms of "if-then-else" rules, or "event-condition-action". Business analysts need to be involved in the decision on which representation to use from the onset: they should go to the whiteboard and, with no prompting from the business rules or decision management implementation specialists, specify how they see their concepts, policies and workflow. More often than not, they will use a combination of textual forms and graphical representations, and fairly well defined workflows. The role of the implementation specialist is then to select the representations in the tools that reflect, as closely as possible, the representations and flows used most frequently.

I really cannot overstate the importance of this approach. One of the key tenets of the Decision Management approach put forward in the early days, and one that I still believe in after all these years, is that the business analyst must be able to understand the decisions implemented, and the implementation specialist must understand what the changes to the decisions need to be. Ideally, the business analyst can implement the changes directly – but even that requires the implementation specialists and the business analysts to have a proper common ground on which to effectively define and enforce the boundaries of what can be done that way.

There is one key consequence of this approach: there is no way a single representation will be sufficient to elicit, implement and manage decisions through their lifecycle. I know that there are whole companies out there pushing their graphical or textual representation to the fore as the solution to all decision logic, but on this problem, they fail. Yes, I do not doubt that they can express all possible decision logic – but that's not the question: the question is whether they can do it in a way that is efficient for the business analyst and that will scale through their workflow and the evolution of decisions.

Let’s take a few examples:
  • I am very familiar with a tool that mostly uses decision trees as the way to express all decision logic, including all potential initial states and all potential outcomes. No surprise there – the decision trees in question grow to thousands, even tens of thousands, of nodes, becoming unusable. Patch solutions, like tree simplification algorithms, are not an answer: the issue is that the tree metaphor is not adapted to all types of decision logic. The internet is full of references to studies that demonstrate that.
  • Another tool uses a table-like metaphor as its only representation for decision logic. Again, we hit a similar scalability problem, though for different technical reasons than the ones that keep decision trees from scaling. Tables are awfully poor at representing disjointed logic, and even worse at handling exceptions. Yet there are many, many cases in which a single decision is in fact a multi-step decision with a number of exceptions. The implementations end up with a large number of tables that are difficult to navigate: close-to-empty tables representing exceptions sitting alongside huge tables mixing multiple steps of the decision, tables with extreme density variability (sections with a lot of content followed by sections that are mostly empty, where the user needs to be very good at finding the relevant information), etc…
  • Others rely exclusively on semi-formal languages. These have the advantage of being closer to the way the policies would be documented in textual form, but they lose the significant synthesizing power of metaphors. Explaining a scorecard in text is incredibly cumbersome compared to what can be done in a real scorecard representation (see the sketch after this list).
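To make that last point concrete, here is a minimal sketch of a scorecard treated as its own representation rather than as text. Everything in it (names, bins, point values) is invented for illustration; the point is only that the analyst maintains a compact table of characteristics, bins and points, while the equivalent if-then-else prose would run to dozens of clauses.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical applicant attributes and scorecard structure, for illustration only.
public record Applicant(int Age, decimal Income, int DelinquenciesLast2Years);
public record ScoreBin(string Label, Func<Applicant, bool> Matches, int Points);
public record Characteristic(string Name, IReadOnlyList<ScoreBin> Bins);

public static class Scorecard
{
    // For each characteristic, add the points of the first bin that matches the applicant.
    public static int Score(Applicant applicant, IEnumerable<Characteristic> card) =>
        card.Sum(c => c.Bins.First(b => b.Matches(applicant)).Points);
}

// The "Age" characteristic reads exactly like one block of rows in a scorecard table:
//   var age = new Characteristic("Age", new ScoreBin[]
//   {
//       new("under 25",  a => a.Age < 25, 10),
//       new("25 to 39",  a => a.Age < 40, 25),
//       new("40 and up", a => true,       35),
//   });
```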

Lack of consideration for the availability of context in the business analysts' interfaces


This is a constant concern of mine, and something not that many business interfaces do provide: availability of context directly where the business analyst is working. Carole-Ann refers to a huge aspect of this when she focuses on the navigation issues in the user interfaces.

What the business analyst is doing when she/he is manipulating a particular representation is contextual – and to do it right, she/he needs access, directly there and in the most to-the-point form, to the overall context.

Take the example of working on a decision step in a credit card originations decision that refers to the fact that a given customer is a high value customer. It is likely that "high value" is an attribute of a customer that is defined by business logic somewhere else in the application, and the business analyst may need to understand it as she/he is implementing the decision step. Similarly, she/he may need to understand where else that attribute is used, and how, etc…
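As a sketch of what having that context available could look like underneath the interface (the class, the term and the thresholds below are purely hypothetical), the point is simply that a term such as "high value customer" is defined once, can be looked up from any decision step that uses it, and can be cross-referenced with a "where used" query:

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: a shared vocabulary of derived business terms.
public record Customer(decimal AnnualSpend, int TenureYears);

public class BusinessVocabulary
{
    private readonly Dictionary<string, Func<Customer, bool>> _definitions = new();
    private readonly Dictionary<string, List<string>> _usedBy = new();

    // A term such as "high value customer" is defined once, in one place.
    public void Define(string term, Func<Customer, bool> definition) =>
        _definitions[term] = definition;

    // Every decision step that evaluates the term is recorded, enabling "where used" navigation.
    public bool Evaluate(string term, Customer customer, string usingStep)
    {
        if (!_usedBy.TryGetValue(term, out var steps)) _usedBy[term] = steps = new List<string>();
        if (!steps.Contains(usingStep)) steps.Add(usingStep);
        return _definitions[term](customer);
    }

    public IReadOnlyList<string> WhereUsed(string term) =>
        _usedBy.TryGetValue(term, out var steps) ? (IReadOnlyList<string>)steps : Array.Empty<string>();
}

// Hypothetical usage from a credit card originations step:
//   vocabulary.Define("high value customer", c => c.AnnualSpend > 50_000m && c.TenureYears >= 2);
//   bool highValue = vocabulary.Evaluate("high value customer", customer, "originations / credit line step");
```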

This is essential – and forgotten very frequently.

It actually reaches extremes not just with the single-metaphor approaches described above, which again have difficulties coping with varying context, but with single "if-then-else" or "when-then-else" rule representations. It is not true that single rules act alone – that is far more the exception than it is the rule (sorry). Those environments do not scale beyond a few tens of rules, and that, however powerful said rules are, is nowhere close to what is needed in real applications.

There is much more to be said about this subject, but Carole-Ann is absolutely right on this issue being the biggest one that needs to be addressed to achieve successful decision management applications.

It is actually telling to see the difference in implementation steps between how Carole-Ann would do it and how it is frequently done elsewhere. Carole-Ann will focus first and mostly on eliciting the proper concepts, representations and flow, then design the interface for business analysts to the point where it is functional for them and they can start using it, and only then worry about the actual implementation behind it. This has led to large-scale implementations with complex but appropriate life cycles, very dynamic evolution and still excellent run-time performance.

Let’s avoid this pitfall going forward.

Tuesday, January 26, 2010

Decision Management as an Academic Discipline

Those of you who follow this blog (or know me) know that I am quite passionate about decision management, and have essentially articulated the past decade of my professional career around the many problems it requires us software and analytic people to solve.

One key issue that keeps bugging me is the lack of support in the academic world for decision management as a full-blown discipline available to both business types (your MBA suits) and technical types (your typical CS or analytics specialist).

I wrote a post on this in Carole-Ann Matignon's TechDec - would love to read your comments.

Monday, January 25, 2010

Books: Adaptive systems - Dynamic Networks - Evolutionary Dynamics

I just finished reading "Complex Adaptive Systems: An Introduction to Computational Models of Social Life", by John H Miller and Scott E Page. Interesting book, dealing with problems that are dear to me, although from a fairly different perspective from the one I am used to.

I come to this from the angle of decision management. As Carole-Ann Matignon writes in her blog, a lot of developments are expected to make 2010 an interesting year for the discipline. Dealing with uncertainty is one of them, and one of the key sources of uncertainty is related to complexity.

One aspect that deserves attention is how we can manage decisions in light of the effects of the very high level of interconnectedness between multiple entities, with relations that are complex, non-linear, etc. In the business world I have until very recently been active in, this translates into the effects of customer psychology, mass effects, etc. These are all problems that frequently vex traditional approaches and tend to create challenges for standard modeling approaches.

The book properly highlights the key difference between a complex system and a complicated system: the complex system's behavior cannot be explained by reducing it to sub-components and explaining those components and their interactions. Which, in a certain sense, creates a significant challenge for some of the traditional scientific or engineering approaches. That is what makes these complex systems so fascinating: they are composed of a multitude of inter-related agents interacting on a massive scale, yet they exhibit emergent behavior similar, a posteriori, to that of a single agent, behavior that cannot be explained by decomposition.

John H Holland, from the Santa Fe Institute, has written a lot about this subject. His analysis of the characteristics of Complex Adaptive Systems is often reduced to this: order is emergent (or behavior is emergent), history is irreversible, future behavior is often unpredictable. I am not sure about the last point - it's not so much a question of whether the future behavior is predictable, it's much more a question of whether traditional approaches allow the prediction of future behavior.
The key challenge is how to model that behavior. The typical techniques used in most decision management approaches today do not deal with complex systems - they focus only on simple agent behavior, modeling away the network effects through extreme simplification. When those effects intervene in real life, the models become totally irrelevant - and I would venture to say that a lot of the events we've seen in the past decade, and in particular the last couple of years, have shown the limits of that simplification. After all, it could be claimed that network effects were largely responsible for the propagation of bad lending practices, as well as for the reactions when the bubble collapsed.
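As a purely illustrative sketch of the gap (the numbers, and the fully connected "network", are made up), compare an independent-agents model with one in which each agent's risk is fed by the share of agents that have already defaulted:

```csharp
using System;
using System.Linq;

// Toy model: with contagion = 0 this is the usual "independent agents" assumption;
// with contagion > 0 the same base rate can produce a very different aggregate outcome.
public static class ContagionToy
{
    public static int SimulateDefaults(int agents, double baseRate, double contagion, int rounds, Random rng)
    {
        var defaulted = new bool[agents];
        for (int round = 0; round < rounds; round++)
        {
            // Share of the (fully connected) population already in default feeds back into each agent's risk.
            double defaultedShare = defaulted.Count(d => d) / (double)agents;
            for (int i = 0; i < agents; i++)
                if (!defaulted[i] && rng.NextDouble() < baseRate + contagion * defaultedShare)
                    defaulted[i] = true;
        }
        return defaulted.Count(d => d);
    }
}
```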

It reminds me of the classical work "The Structure and Dynamics of Networks", another nice, though less accessible, collection of papers (Jean-Marie Chauvet refers to it in another post - in French, but I am sure he will gladly translate should you ask for it). Read it; it is challenging in parts (some of which I ended up skipping) but good. And let me know what you got from it.

Not in the same space, but connected: I got "Evolutionary Dynamics: Exploring the Equations of Life" by Martin Nowak for my 14-year-old. Probably a little too ambitious for him (I read it first and am still waiting for him to pick it up), but the book is beautiful, and the theme is connected to Complex Adaptive Systems. Nowak is an expert, and his book covers a wide range of analytic approaches that should become part of the arsenal of those studying complex systems.

This whole thing may seem like a scientist's dream (remember, I am not a scientist), but the reality is that large consumer-oriented companies are already dealing with this kind of problem. Just think for a minute about what happens in an eBay auction, or an Amazon recommendation.

I am looking forward to a significant synthesis of these approaches and, as a result, to techniques that will enrich the way we approach decision management in the future.

What's your take?

Wednesday, January 6, 2010

A discovery: PLINQO

I have recently spent some time looking at LINQ-to-SQL to see whether there would be an opportunity to benefit from it and replace tools like NHibernate, which I am used to. The best result of that effort was the discovery of PLINQO, provided by the CodeSmith team you may be familiar with.

Before reading the rest of this short post, check them out: www.plinqo.com (and take also a look at www.codesmithtools.com).

Initially, I was just looking for simple time-saving, template-based code generation to take care of the usual tedious work. I was surprised to discover that PLINQO also removes a number of the most annoying limitations of LINQ-to-SQL, and that it does so in a safe (no loss of customization, yeah), fairly nice (a lot of developer friendliness in what gets generated) and extensible way.

Three key distinctive features of PLINQO:
- Many-to-Many relationships
- Entity detach
- Complete serialization and cloning
The availability of these three features essentially removes a significant amount of tedious, error-prone, hard-to-maintain code that LINQ-to-SQL otherwise requires.
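To give a flavor of the boilerplate involved, here is the kind of hand-rolled "detach by deep copy" helper that plain LINQ-to-SQL tends to force on a project. It assumes the entity classes were generated with serialization attributes enabled, and it is exactly the sort of code that the detach, serialization and cloning support listed above is meant to make unnecessary.

```csharp
using System.IO;
using System.Runtime.Serialization;

// Illustrative only: a manual deep copy used to get an entity graph that is no longer
// tracked by its DataContext. Assumes [DataContract]/[DataMember] attributes on the entities.
public static class EntityCopy
{
    public static T DeepCopy<T>(T entity)
    {
        var serializer = new DataContractSerializer(typeof(T));
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, entity);   // snapshot the entity graph
            stream.Position = 0;
            return (T)serializer.ReadObject(stream);  // the copy is detached from change tracking
        }
    }
}
```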

One key area I need to look into is support for the LINQ Dynamic Query Library.

There is much more to PLINQO. More to come.

Tuesday, November 10, 2009

Events and semantics

I spent the last few weeks attending a number of conferences. That gave me an excellent opportunity to talk to customers, meet with prospects, and get an updated view of the industry and the competition.

I also got the opportunity to attend a number of interesting presentations. In particular, at Business Rules Forum (http://www.brforum.com), I attended a peer discussion hosted by Paul Haley (http://haleyai.com/wordpress/). Paul is a well known luminary in the Artificial Intelligence world, and a vocal promoter of a renewal of the knowledge management software landscape.

Paul and I got into a bit of a debate around how badly the BRMS world – and to a certain extent the so-called CEP world – is faring at dealing with the true complexities of business knowledge at large. I think our disagreement comes mostly from the fact that we are looking at different levels of what the software stacks provide. BRMS vendors provide multiple expression layers – Paul focuses on the lowest-level one, which is misleading, because they also provide model-driven layers significantly above the low-level, syntax-based ones, layers that enable safe, guided, business-compliant and business-friendly expression of business logic. That's the key to their success, and it's only when those layers were introduced that the then-nonexistent BRMS market started growing into its current state. Coming from an AI vendor myself, and having been at the core of that transformation, that's one thing I am sure about.
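As an illustration of what such a model-driven layer means in practice (the template, fields and values below are entirely hypothetical, not any vendor's actual format): the business user edits constrained rows, and the tooling, not the user, turns each row into the underlying low-level condition.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative only: a constrained, business-friendly template whose rows the tool
// translates into low-level conditions on behalf of the business user.
public record RateAdjustmentRow(string Segment, int MinScore, int MaxScore, decimal Adjustment);
public record Application(string Segment, int Score);

public static class RateAdjustmentTable
{
    public static decimal Adjust(Application app, IEnumerable<RateAdjustmentRow> rows) =>
        rows.Where(r => r.Segment == app.Segment && app.Score >= r.MinScore && app.Score <= r.MaxScore)
            .Select(r => r.Adjustment)
            .DefaultIfEmpty(0m)     // no matching row: no adjustment
            .First();
}
```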

Towards the end of the discussion, we got into the importance of event semantics / ontology of events. Great, I absolutely agree.
But what I do not agree with are simplifications that end up leading to slogans such as "a process is an event" or "a decision is an event". That just creates semantic confusion and muddies the waters for everybody. And it matters, because few people can spend all day thinking at Paul's level – he knows very well what he means by those slogans and can easily delve into what they really cover – but almost everybody else will be confused. As confused as those who mistake an implementation (OOP-based) for a concept ("an event is an object").

We need to be careful.

I tend to be more dogmatic about the usage of the terms. Here is what I would say:

- The core notion is that of the state of the business. Take that literally. At any point in time, the business that is supported through the implementation at hand is in a given state, and that state has an explicit and implicit representation. The state needs to be fully accessible – we should be able to query against it, to archive it, etc…

- Any change to the state of the business along any dimension represents a transition, which corresponds to a business event. I resist extending the notion of a business event to anything other than a transition of the state of the business. In this view, events are dual to states: I can reconstruct the state of the business at any point in time if I know the original state and the complete sequence of events up to that point in time. Conversely, I can re-generate all business events if I know the state of the business at every point in time from the original state to the current state (a minimal sketch of this duality follows the list below). From this perspective, events have a context in which they occur (the state of the business at occurrence time, occurrence time and "location", time and "location" referential, source, etc…). But what they do not have is duration. This is not illogical – if you consider that your business is moving from state S1 to state S2 and that this takes a given duration, the only reason the duration is there is that you can observe the state of the business between S1 and S2, which basically means you can decompose your transition into a series of stepwise transitions that you cannot decompose further. This approach has been taken successfully in many real-time systems, including some distributed real-time systems in which the notion of event is central.

- In this view then, the overall decomposition of a modern enterprise application is much simplified with respect to the happy mess we seem to have today, with overlaps everywhere.
  - The business application has an explicit / implicit business state which is always accessible. Typical data management, profile management and state management components play a key role here.
  - Changes in state are monitored, sensed, correlated and transformed into business events. This is where event correlators, pattern matchers, etc… play a key role.
  - Business events trigger the evaluation of what to do with the event, through the execution of business decisions. That is where decision management – essentially built around the BRM (business rules management) capabilities of today – plays the key role. Note that these decisions do not change the state of the business: they read it, they take the event into account (meaning they know what the state was before, what the event is, and what the resulting state is), and they provide the instructions on what to do next.
  - The business decisions are executed through business processes. These processes are the ones that change the state of the business, triggering further events and feeding the same cascading series of steps. This is where BPM (business process management) plays. And to Paul's point, the execution of a business process, to the extent that it does change the state of the business, manifests itself as business events. But it's not a one-to-one mapping, and, definitely, a process is not an event.
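Here is a minimal sketch of the state/event duality described above, with hypothetical account types and events; it illustrates only the replay argument, not any particular product.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative only: business state is explicit, an event is a transition, and the state
// at any point in time can be rebuilt from the initial state plus the sequence of events.
public record AccountState(decimal Balance, bool Blocked);

public abstract record BusinessEvent(DateTime OccurredAt);
public record AmountDebited(DateTime OccurredAt, decimal Amount) : BusinessEvent(OccurredAt);
public record AccountBlocked(DateTime OccurredAt) : BusinessEvent(OccurredAt);

public static class AccountReplay
{
    // Apply a single transition: the event is exactly the difference between two states.
    public static AccountState Apply(AccountState state, BusinessEvent evt) => evt switch
    {
        AmountDebited debit => state with { Balance = state.Balance - debit.Amount },
        AccountBlocked      => state with { Blocked = true },
        _                   => state
    };

    // Reconstruct the state as of any point in time from the initial state and the event log.
    public static AccountState ReplayUntil(AccountState initial, IEnumerable<BusinessEvent> log, DateTime asOf) =>
        log.Where(e => e.OccurredAt <= asOf)
           .OrderBy(e => e.OccurredAt)
           .Aggregate(initial, Apply);
}
```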

This corresponds to http://www.edmblog.com/weblog/2008/11/an-attempt-at-demystifying-cep-bpm-and-brms.html as well as http://architectguy.blogspot.com/2008/11/more-on-cep.html.
This is a simple model. It has the merit that it does not confuse notions.

Paul addressed many other points during that brief session – many of which I agree with and some that I think warrant further discussion. I will cover them in later posts.

Tuesday, October 20, 2009

Unstructured flows

I have not blogged for a while… Too much work, too much involvement in too many decisions with too little time and information. Pretty mind-numbing work.

But it’s time for me to put some of my neurons and synapses back to work.

During the last couple of weeks, Carole-Ann (www.edmblog.com, www.twitter.com/cmatignon) attended the Gartner BPM summit. One of the key things she conveyed was that there was a fair amount of discussion around the issue of “unstructured flows” and how the industry is addressing them. Besides the fact that there is no unanimity around what to name these flows, there is of course no real agreement on how important they are, or how relevant they are to real problems.

I will try to give a first reaction to this in terms of the implications to decision management.

I will assume a very simple distinction between “structured flows” and “unstructured flows”:
- “Structured flows” are those that can relatively easily be described in control-flow diagrams, with explicit exception management – and by “easily” I mean in a way that can be explained from the diagram alone, without writing down additional details or describing exceptions through another formalism (something that tends to happen with business exceptions).
- “Unstructured flows” are those that cannot be described that way. They tend to be composed of micro-flows (pre-defined or constructed on the fly) that are stitched together at run time through the recognition of patterns in events.
This may or may not correspond to the distinctions the rest of the industry sees. If not, then just consider these definitions to be specific to this blog.
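As a purely illustrative sketch of that second definition (event types, patterns and micro-flow names are all hypothetical): an “unstructured flow” can be seen as a set of micro-flows, each guarded by an event pattern and stitched together at run time rather than drawn as a single control-flow diagram.

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: micro-flows selected at run time by recognizing patterns in events.
public record CaseEvent(string Type, IDictionary<string, object> Payload);

public class MicroFlowRouter
{
    private readonly List<(Func<CaseEvent, bool> Pattern, Action<CaseEvent> MicroFlow)> _routes = new();

    public void When(Func<CaseEvent, bool> pattern, Action<CaseEvent> microFlow) =>
        _routes.Add((pattern, microFlow));

    // Every matching micro-flow fires; the overall "flow" emerges from the sequence of events,
    // not from a pre-drawn diagram.
    public void Handle(CaseEvent evt)
    {
        foreach (var (pattern, microFlow) in _routes)
            if (pattern(evt)) microFlow(evt);
    }
}

// Hypothetical wiring: a referral micro-flow fires only when a human-review pattern is recognized.
//   router.When(e => e.Type == "UnderwritingReferred", e => AssignToSeniorUnderwriter(e));
```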

While I did not attend the conference, I am confronted on a daily basis with this exact issue. Among other things, I am currently responsible for the Enterprise Architecture group at my company, and involved in the architecture and implementation of large enterprise applications – most of which involve both “structured flows” and “unstructured flows”.

One key characteristic I see is the following:
- “Structured flows” tend to cover a large part of the automation of these enterprise applications – but they focus essentially on those flows that are fairly clear, require little human intervention and, by virtue of being easy to automate, end up becoming a “must-have” but no longer a differentiator.
- “Unstructured flows” tend to focus on those difficult cases that are – at that point of maturity of the application – not fully automated, and where the interplay between humans (or at least not predictable events) and flows presents the big differentiator in the application – in terms of risk and/or value.

We can take many examples where that is the case.
Take fraud management:
- Automated “structured flows” capture the essence of the known or highly predictable fraud – and catch a large part of the fraud attempts
- But it takes “unstructured flows” to have humans intervene in helping qualify the complicated cases (high value customer, high amounts, etc…) and in identifying new fraud modes
Take insurance underwriting:
- Automated “structured flows” cover anywhere from 60% to 85% of applications – and almost everybody has that
- But it takes humans involved in “unstructured flows” to deal with the “referrals”, which is where the delicate handling of special cases can help maximize the value/risk ratio


What is the implication of all this for decision management?

Decision management has already been largely involved in improving the relevance of “structured flows” to the business needs and constraints. To a large extent, the success of BPM in large enterprise applications can be traced to its ability to isolate the key business decision points and automate the execution of these decisions in a repeatable and efficient way.
Business Rules Management Systems provide the key mechanism for separating the decision logic from the flow logic, in a way that is manageable by the business and controllable by IT. They allow the “structured flows” to cope with the complexities of policies, procedures and practices specific to industries, sectors, enterprises, departments, etc. And they are at the core of the success of many large-scale enterprise applications.

In this kind of application, the roles are clearly differentiated (even though BPM vendors will argue they handle decision management – they don’t): typically, the BRMS handles the decisions, and the “structured flows” handle the execution of those decisions.
Generalizing it – and referring to a number of ongoing discussions around CEP, BPM and BRMS (http://www.edmblog.com/weblog/2008/11/an-attempt-at-demystifying-cep-bpm-and-brms.html and http://architectguy.blogspot.com/2008/11/more-on-cep.html):
- CEP detects business events from the flow of system and application events
- Business events trigger “structured flows”
- Which delegate decisions to BRMS
- And then carry on the execution of those decisions leveraging various integration capabilities

Bread and butter stuff.
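A minimal sketch of that division of roles, with illustrative interfaces rather than any specific product’s API: event management produces a business event, the structured flow orchestrates, the BRMS-backed decision service decides, and the flow executes the instructions it gets back.

```csharp
using System.Collections.Generic;

// Illustrative only: the flow never hard-codes the policy; it delegates decisions and
// then carries out whatever instructions come back.
public record BusinessEvent(string Type, IDictionary<string, object> Data);
public record Instruction(string Action, IDictionary<string, object> Parameters);

public interface IDecisionService            // fronted by the BRMS
{
    IReadOnlyList<Instruction> Decide(string decisionPoint, BusinessEvent businessEvent);
}

public interface IIntegrationGateway         // services, queues, human work baskets...
{
    void Execute(Instruction instruction);
}

public class ClaimIntakeFlow                 // a "structured flow"
{
    private readonly IDecisionService _decisions;
    private readonly IIntegrationGateway _gateway;

    public ClaimIntakeFlow(IDecisionService decisions, IIntegrationGateway gateway)
    {
        _decisions = decisions;
        _gateway = gateway;
    }

    public void OnBusinessEvent(BusinessEvent businessEvent)
    {
        var instructions = _decisions.Decide("claim-intake", businessEvent);
        foreach (var instruction in instructions)
            _gateway.Execute(instruction);
    }
}
```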

Dealing with “unstructured flows” introduces both challenges and opportunities in terms of decision management. Some of these – and I do not intend to be complete here:
- Decisions are taken by clearly outlined “decision services” – implemented through a BRMS, etc. – as well as by less formal services (in the sense of not being software-codified) – humans in particular; either separately or in conjunction.
- Decisions and actions are stitched together through micro-flows that are triggered through complex event inter-play.
- Since decisions will take into account more informal steps, understanding them and managing their performance becomes significantly more difficult

The last two points are essential.

True decision management for “unstructured flows” will require:
- Understanding events, understanding event correlation and the translation of system/application events into real business events – this is what event management (I hate to use the CEP term) should cover.
- Making every effort to understand decisions at large: both the parts codified in BRMS, translated from predictive analytics or simply extracted from policies and procedures, and the parts not yet codified there and less formal.
- Including significant collaboration aspects as part of the context of decisions.
- Tracking the performance of the decisions being made, to identify both the potential for further automation of the informal parts and ways to improve the usage made of the corresponding high-cost resources.
- Simulating the decisions – including putting to work knowledge gained through the tracking of the decisions made in the informal parts of the decisions.
- Progressively optimizing the unstructured flows through experiments (champion / challenger)
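The last bullet lends itself to a small sketch: route a stable fraction of cases to a challenger decision path and record which path decided each case, so the two can be compared later. Everything here (the 10% split, the string case id, the in-memory log) is an illustrative assumption.

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: champion/challenger routing with a decision log for later comparison.
public class DecisionExperiment
{
    private readonly Func<string, string> _champion;
    private readonly Func<string, string> _challenger;
    private readonly double _challengerShare;

    public List<(string CaseId, string Path, string Decision)> Log { get; } = new();

    public DecisionExperiment(Func<string, string> champion, Func<string, string> challenger,
                              double challengerShare = 0.10)
    {
        _champion = champion;
        _challenger = challenger;
        _challengerShare = challengerShare;
    }

    public string Decide(string caseId)
    {
        // Deterministic assignment so the same case always follows the same path
        // (a production system would use a stable hash rather than GetHashCode).
        bool useChallenger = Math.Abs(caseId.GetHashCode()) % 100 < _challengerShare * 100;
        string path = useChallenger ? "challenger" : "champion";
        string decision = useChallenger ? _challenger(caseId) : _champion(caseId);
        Log.Add((caseId, path, decision));
        return decision;
    }
}
```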

Counter-intuitively (maybe), “unstructured flows” will provide more challenges and more opportunity for decision management technologies and products.