Tuesday, November 10, 2009

Events and semantics

I spent the last few weeks attending a number of conferences. That gave me an excellent opportunity to talk to customers, meet with prospects, and get an updated view of the industry and the competition.

I also got the opportunity to attend a number of interesting presentations. In particular, at Business Rules Forum (http://www.brforum.com), I attended a peer discussion hosted by Paul Haley (http://haleyai.com/wordpress/). Paul is a well-known luminary in the Artificial Intelligence world, and a vocal promoter of a renewal of the knowledge management software landscape.

Paul and I got into a bit of a debate around how badly the BRMS world – and to a certain extent the so-called CEP world – is faring at dealing with the true complexities of business knowledge at large. I think our disagreement comes mostly from the fact that we are looking at different levels of what the software stacks provide. BRMS vendors provide multiple expression layers. Paul focuses on the lowest-level one, which is misleading: above the low-level, syntax-based layers, these products provide model-driven layers that enable safe, guided, business-compliant and business-friendly expression of business logic. That’s the key to their success, and it’s only when those layers were introduced that the then-nonexistent BRMS market started growing into its current state. Coming from an AI vendor myself, and having been at the core of that transformation, that’s one thing I am sure about.

Towards the end of the discussion, we got into the importance of event semantics / ontology of events. Great, I absolutely agree.
But what I do not agree with are simplifications that end up leading to slogans such as “a process is an event” or “a decision is an event”. That just creates semantic confusion and muddies the waters for everybody. And it matters, because few are those who can spend all day thinking at Paul’s level – he knows very well what he means by those slogans and can easily delve into what they really cover; almost everybody else will be confused. As confused as those who mistake an implementation (“an event is an object”, in OOP terms) for the concept.

We need to be careful.

I tend to be more dogmatic about the usage of the terms. Here is what I would say:

- The core notion is that of the state of the business. Take that literally: at any point in time, the business that is supported through the implementation at hand is in a given state, and that state has an explicit and an implicit representation. The state needs to be fully accessible – we should be able to query against it, to archive it, etc.

- Any change to the state of the business along any dimension represents a transition, which corresponds to a business event. I resist extending the notion of a business event to anything other than a transition of the state of the business. In this view, events are dual to states: I can reconstruct the state of the business at any point in time if I know the original state and the complete sequence of events up to that point in time. Conversely, I can regenerate all business events if I know the state of the business at every point in time from the original state to the current state (this duality is sketched in code after this list). From this perspective, events have a context in which they occur (the state of the business at occurrence time, the occurrence time and “location”, the time and “location” referential, the source, etc.). But what they do not have is duration. This is not illogical: if you consider that your business moves from state S1 to state S2 and that this takes a given duration, the only reason the duration is there is that you can observe the state of the business between S1 and S2, which means you can decompose your transition into a series of stepwise transitions that you cannot decompose further. This approach has been taken successfully in many real-time systems, including some distributed real-time systems in which the notion of event is central.

- In this view then, the overall decomposition of a modern enterprise application is much simplified with respect to the happy mess we seem to have today, with overlaps everywhere.
o The business application has an explicit / implicit business state which is always accessible. Typical data management, profile management and state management components play a key role here.
o Changes in state are monitored, sensed, correlated and transformed into business events. This is where event correlators, pattern matchers, etc. play a key role.
o Business events trigger the evaluation of what to do with the event through the execution of business decisions. That is where decision management – essentially built around the BRM (business rules management) capabilities of today – plays the key role. Note that these decisions do not change the state of the business: they read it, they take the event into account (meaning they know what the state was before, what the event is for, and what the resulting state is), and they provide the instructions on what to do next.
o The business decisions are executed through business processes. These processes are the ones that change the state of the business, triggering further events and feeding the same cascading series of steps. This is where BPM (business process management) plays. And to Paul’s point, the execution of a business process, to the extent that it does change the state of the business, manifests itself as business events. But it is not a one-to-one mapping, and, definitely, a process is not an event.
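To make this concrete, here is a minimal sketch in Python. Every name in it (BusinessEvent, apply_event, replay, decide) is hypothetical and made up for illustration; the only point is that events are pure transitions, the state can be reconstructed by replaying them, and decisions read state and events without changing anything.

```python
from dataclasses import dataclass

# Hypothetical names, for illustration only: the "state of the business" is just
# a dictionary of facts, and a business event is a named transition with a payload.
@dataclass(frozen=True)
class BusinessEvent:
    name: str
    payload: dict

def apply_event(state: dict, event: BusinessEvent) -> dict:
    """A transition: returns the new state; the event itself has no duration."""
    new_state = dict(state)
    new_state.update(event.payload)
    return new_state

def replay(initial_state: dict, events: list) -> dict:
    """The duality: the original state plus the complete sequence of events
    reconstructs the state of the business at any point in time."""
    state = initial_state
    for event in events:
        state = apply_event(state, event)
    return state

def decide(state: dict, event: BusinessEvent) -> list:
    """A decision reads the state and the event but never mutates them;
    it only returns instructions on what to do next."""
    if event.name == "order_placed" and state.get("credit_hold"):
        return ["route_to_manual_review"]
    return ["confirm_order"]

# The process (not shown) would execute those instructions; it is the only place
# where the state changes, which in turn manifests as further business events.
events = [BusinessEvent("credit_flagged", {"credit_hold": True}),
          BusinessEvent("order_placed", {"open_orders": 1})]
current = replay({"credit_hold": False, "open_orders": 0}, events)
print(decide(current, events[-1]))   # ['route_to_manual_review']
```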

This corresponds to http://www.edmblog.com/weblog/2008/11/an-attempt-at-demystifying-cep-bpm-and-brms.html as well as http://architectguy.blogspot.com/2008/11/more-on-cep.html.
This is a simple model, and it has the merit of not confusing notions.

Paul addressed many other points during that brief session – many of which I agree with and some that I think warrant further discussion. I will cover them in later posts.

Tuesday, October 20, 2009

Unstructured flows

I have not blogged for a while… Too much work, too much involvement in too many decisions with too little time and information. Pretty mind-numbing work.

But it’s time for me to make some of my neurons and synapses work.

During the last couple of weeks, Carole-Ann (www.edmblog.com, www.twitter.com/cmatignon) attended the Gartner BPM summit. One of the key things she conveyed was that there was a fair amount of discussion around the issue of “unstructured flows” and how the industry is addressing them. Besides the fact that there is no unanimity around what to call these flows, there is of course no real agreement on how important they are, and how relevant they are to real problems.

I will try to give a first reaction to this in terms of the implications for decision management.

I will assume a very simple distinction between “structured flows” and “unstructured flows”:
- “Structured flows” are those that can relatively easily be described in control flow diagrams, with explicit exception management – and by “easily” I mean that the flow can be explained from the diagram alone, without writing down additional details or describing exceptions through another formalism (something that tends to happen with business exceptions).
- “Unstructured flows” are those that cannot be described that way. They tend to be composed of micro-flows (pre-defined or constructed on the fly) that are stitched together at run time through the recognition of patterns in events (a rough sketch of what that stitching can look like follows below).
This may or may not correspond to the distinctions the rest of the industry sees. If not, just consider these definitions to be specific to this blog.
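As a rough illustration of what that run-time stitching could look like – with entirely invented names and patterns, not an actual product API – consider a minimal sketch where each micro-flow is just a callable and a few event-pattern rules pick the next one on the fly:

```python
import re
from typing import Callable, List, Tuple

# Invented names, for illustration only: each micro-flow is a callable, and the
# "stitching" is a set of event-pattern rules that select the next micro-flow at
# run time, instead of a control-flow diagram drawn up front.
MicroFlow = Callable[[dict], None]

def collect_documents(case: dict) -> None:
    case["documents_complete"] = True

def escalate_to_expert(case: dict) -> None:
    case["assigned_to"] = "senior underwriter"

# Pattern -> micro-flow. A real system would use an event-correlation engine
# rather than regular expressions.
ROUTING_RULES: List[Tuple[str, MicroFlow]] = [
    (r"missing_document", collect_documents),
    (r"high_value|unusual_pattern", escalate_to_expert),
]

def on_event(event_name: str, case: dict) -> None:
    """Stitch micro-flows together as event patterns are recognized."""
    for pattern, flow in ROUTING_RULES:
        if re.search(pattern, event_name):
            flow(case)

case = {"id": "C-42"}
for evt in ["missing_document_detected", "high_value_claim_detected"]:
    on_event(evt, case)
print(case)  # {'id': 'C-42', 'documents_complete': True, 'assigned_to': 'senior underwriter'}
```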

While I did not attend the conference, I am confronted on a daily basis with this exact issue. Among other things, I am currently responsible for the Enterprise Architecture group at my company, and involved in the architecture and implementation of large enterprise applications – most of which involve both “structured flows” and “unstructured flows”.

One key characteristic I see is the following:
- “Structured flows” tend to cover a large part of the automation of these enterprise applications – but they essentially focus on those flows that are fairly clear and require little human intervention; by virtue of being easy to automate, they end up becoming a “must-have” but no longer a differentiator.
- “Unstructured flows” tend to focus on those difficult cases that are – at that point of maturity of the application – not fully automated, and where the interplay between humans (or at least unpredictable events) and flows represents the big differentiator in the application – in terms of risk and/or value.

We can take many examples where that is the case.
Take fraud management:
- Automated “structured flows” capture the essence of the known or highly predictable fraud – and catch a large part of the fraud attempts.
- But it takes “unstructured flows” to have humans intervene in qualifying the complicated cases (high-value customers, high amounts, etc.) and in identifying new fraud modes.
Take insurance underwriting:
- Automated “structured flows” cover anywhere in the 60%-85% range of applications – and almost everybody has that.
- But it takes humans involved in “unstructured flows” to deal with the “referrals”, where the careful handling of special cases can help maximize the value/risk ratio (a rough sketch of that split follows below).
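To make the underwriting split tangible – the thresholds and field names below are invented, not real underwriting rules – the structured path decides the clear cases and refers the rest to the human-driven, unstructured path:

```python
# Invented thresholds and field names, for illustration only.
def underwrite(application: dict) -> str:
    """The structured, automated path: decide the clear cases,
    refer everything else to the human-driven, unstructured path."""
    if application["coverage"] <= 250_000 and application["risk_score"] < 0.6:
        return "auto-approve"
    if application["risk_score"] > 0.9:
        return "auto-decline"
    return "refer"   # the special cases where people make the value/risk trade-off

applications = [
    {"coverage": 100_000, "risk_score": 0.2},
    {"coverage": 500_000, "risk_score": 0.7},
    {"coverage": 50_000,  "risk_score": 0.95},
]
print([underwrite(a) for a in applications])  # ['auto-approve', 'refer', 'auto-decline']
```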


What is the implication of all this for decision management?

Decision management has already been largely involved in improving the relevance of “structured flows” to the business needs and constraints. To a large extent, the success of BPM in large enterprise applications can be traced to its ability to isolate the key business decision points and automate the execution of those decisions in a repeatable and efficient way.
Business Rules Management Systems provide the key mechanism to separate the decision logic from the flow logic in a way that is manageable by the business and controllable by IT. They allow the “structured flows” to cope with the complexities of policies, procedures and practices specific to industries, sectors, enterprises, departments, etc. And they are at the core of the success of many large-scale enterprise applications.

In this kind of application, the roles are clearly differentiated (even though BPM vendors will argue they handle decision management – they don’t): typically, the BRMS handles the decisions, and the “structured flows” handle the execution of those decisions.
Generalizing – and referring to a number of ongoing discussions around CEP, BPM and BRMS (http://www.edmblog.com/weblog/2008/11/an-attempt-at-demystifying-cep-bpm-and-brms.html and http://architectguy.blogspot.com/2008/11/more-on-cep.html) – the chain looks like this (a sketch in code follows the list):
- CEP detects business events from the flow of system and application events
- Business events trigger “structured flows”
- Which delegate decisions to BRMS
- And then carry out the execution of those decisions, leveraging various integration capabilities
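Here is a toy end-to-end sketch of that chain in Python. None of these function names come from an actual CEP or BRMS product – they are stand-ins meant to show the division of labor: detection, flow, decision, execution.

```python
# Invented names throughout: stand-ins for CEP (detection), a structured flow,
# a BRMS decision service, and the execution of the resulting instructions.
def detect_business_event(system_events: list) -> dict:
    """CEP stand-in: correlate low-level events into one business event."""
    if {"card_swiped", "address_mismatch"} <= set(system_events):
        return {"type": "suspicious_transaction"}
    return {}

def decision_service(business_event: dict, customer: dict) -> str:
    """BRMS stand-in: the decision reads the context and returns an instruction."""
    if business_event.get("type") == "suspicious_transaction":
        return "hold_and_notify" if customer["value_tier"] == "high" else "decline"
    return "proceed"

def execute(instruction: str, customer: dict) -> None:
    """Execution stand-in: where integration capabilities would be leveraged."""
    print(f"executing '{instruction}' for customer {customer['id']}")

def structured_flow(system_events: list, customer: dict) -> None:
    """The flow orchestrates: detect, delegate the decision, then execute."""
    event = detect_business_event(system_events)
    if event:
        execute(decision_service(event, customer), customer)

structured_flow(["card_swiped", "address_mismatch"],
                {"id": "C-7", "value_tier": "high"})
# executing 'hold_and_notify' for customer C-7
```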

Bread and butter stuff.

Dealing with “unstructured flows” introduces both challenges and opportunities in terms of decision management. Here are some of them – and I do not intend to be complete:
- Decisions are taken by clearly outlined “decision services” – implemented through BRMS, etc. – as well as by less formal services (in the sense of not being software-codified) – humans in particular; either separately or in conjunction.
- Decisions and actions are stitched together through micro-flows that are triggered through complex event interplay.
- Since decisions will take into account more informal steps, understanding them and managing their performance becomes significantly more difficult.

The last two points are essential.

True decision management for “unstructured flows” will require:
- Understanding events, understanding event correlation, and the translation of system/application events into real business events – this is what event management (I hate to use the CEP term) should cover.
- Making every effort to understand decisions at large: both the parts codified in a BRMS – translated from predictive analytics or simply extracted from policies and procedures – and the parts that are less formal and not yet codified there.
- Including significant collaboration aspects as part of the context of decisions.
- Tracking the performance of the decisions being made, both to identify potential for further automation of the informal parts and to improve the usage made of the corresponding high-cost resources.
- Simulating the decisions – including putting to work the knowledge gained through tracking the decisions made in the informal parts.
- Progressively optimizing the unstructured flows through experiments (champion / challenger – a rough sketch follows this list).
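As a rough sketch of that last point – with invented strategies, amounts and traffic split, not a recommendation – champion/challenger comes down to routing a fraction of cases to an alternative decision strategy and tracking the outcomes of both:

```python
import random

# Invented strategies and thresholds, for illustration only: route a share of
# cases to a challenger strategy, track every decision, and compare performance
# before considering promoting the challenger.
def champion(case: dict) -> str:
    return "refer" if case["amount"] > 1_000 else "approve"

def challenger(case: dict) -> str:
    return "refer" if case["amount"] > 2_000 else "approve"

decision_log = {"champion": [], "challenger": []}

def decide(case: dict, challenger_share: float = 0.2) -> str:
    arm = "challenger" if random.random() < challenger_share else "champion"
    outcome = (challenger if arm == "challenger" else champion)(case)
    decision_log[arm].append((case["id"], outcome))   # decision tracking
    return outcome

for i, amount in enumerate([500, 1_500, 2_500, 800, 3_000]):
    decide({"id": i, "amount": amount})

for arm, log in decision_log.items():
    referrals = sum(1 for _, outcome in log if outcome == "refer")
    print(f"{arm}: {len(log)} decisions, {referrals} referrals")
```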

Counter-intuitively (maybe), “unstructured flows” will provide more challenges and more opportunities for decision management technologies and products.

Wednesday, March 4, 2009

Predictions and Surprises

The WSJ has an interesting series demystifying - or at least discussing - issues around numeracy, probabilities, and their impact.

I just read the following: http://blogs.wsj.com/numbersguy/the-crash-calculations-621/

A couple of key things that I feel are not really covered deeply enough:
- the quality, and ultimately the validity, of predictions in a given context are a direct function of how relevant the explicit and implicit assumptions of the modeling process used to create the prediction are to that context
- decisions should never be made only on predictions obtained through models; they should include scenario-based simulation and impact analysis

The reason we build models is precisely to create abstractions that we can manipulate with the tools of our minds and our technology - tools that allow us to get out of the immediate, sensor-driven reaction mode and forecast. I believe modeling is essential to forecasting. I know some will say that just crunching numbers with no a priori model is the path of the future (a Wired article I read one day), but that's a fallacy: as soon as you use the result of the crunching, you are using a model - maybe an implicit one, but a model nonetheless.

But abstraction means context-dependent simplification, and that is key. Understanding the context the abstraction assumes, and the sensitivity of the resulting model and predictions to variations in that context, is paramount to being able to leverage these predictions.
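One way to make that point concrete - with an entirely made-up "model" and made-up scenarios - is to run the same prediction under several sets of assumptions and look at the spread rather than at a single number:

```python
# Made-up model and numbers, for illustration only: a trivial expected-loss
# "prediction" evaluated under several scenarios to expose its sensitivity
# to the assumed context.
def predicted_loss(default_rate: float, exposure: float) -> float:
    return default_rate * exposure

scenarios = {
    "baseline":        {"default_rate": 0.02, "exposure": 10_000_000},
    "mild_downturn":   {"default_rate": 0.05, "exposure": 10_000_000},
    "severe_downturn": {"default_rate": 0.12, "exposure": 12_000_000},
}

for name, assumptions in scenarios.items():
    print(f"{name:>15}: predicted loss = {predicted_loss(**assumptions):,.0f}")

# The numbers are invented; the point is the spread. A decision that looks safe
# under the baseline assumptions may not survive the other scenarios.
```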

That gets lost, because it's complicated and because of the many psychological factors that make understanding and leveraging statistical and probabilistic results very difficult for the average person.

See the points made in the EDM blog by Carole-Ann (http://www.edmblog.com/).

Monday, February 23, 2009

Netbooks accelerate The Cloud?

The move to "netbooks" and other always-connected devices finally seems to be here to stay - it's been talked about for a while, but the convergence of technology and economics to make it a reality is now with us.
There is a very interesting article in Wired (if I remember correctly) that chronicles the "strange" history of the new generation of "netbooks": how something that was created to address the needs of those who cannot afford current laptops ended up filling a real need even amongst those who are the target market for those laptops. Essentially, we have reached the point at which these tools are becoming appliances, judged more by the function they actually fulfill than by the raw performance they can exhibit, regardless of our need for it.

Combined with the emergence of The Cloud, this shift promises a change in the way all of us, consumers and enterprises, will interact with applications. It seems obvious, but the trend is accelerating. Applications will be at least partially on The Cloud, they will be multi-device, multi-access, asynchronous.

That will bring a series of challenges to us software architects. We will need to make The Cloud ensure the viability of the model for complex, long-lived, IP-rich, confidential, secure flows. We will need to make the asynchronous models work well. We will need to make the user interfaces multi-device and secure, etc.
One proof that this is moving forward, and fast, is that companies such as Microsoft, which do not create trends but make them mainstream, are getting there. The Azure effort (http://www.microsoft.com/azure/) is well known, but they are also thinking across the board about user interfaces and about new architecture approaches for the browser, the window to The Cloud (http://research.microsoft.com/pubs/79655/gazelle.pdf).

It will also bring challenges to the business models for software products, of course. The current models are not adapted to it, and it will take time before a norm gets established.
It will also bring new security needs: big cloud centers will need to be protected against all sorts of attacks, cyber and physical, etc.

Interesting disruption.

Friday, February 6, 2009

How to castrate a bull (not what you think)

It's not in my nature to spend a lot of time reading management books. I do, however, have a real interest in books that deal with the decision making (or frequently decision avoidance) process in organizations. Issues in decision making are at the confluence of many disciplines, which fascinates me. Furthermore, my professional career has been devoted to software tools supporting decision management, an incredibly valuable domain that is still in its infancy.

In one of my recent Amazon shopping sprees, I ended up putting in my basket "How to Castrate a Bull: Unexpected Lessons on Risk, Growth, and Success in Business", a great book written by Dave Hitz, one of Silicon Valley's recent legends and a co-founder of NetApp.

If you are an entrepreneur, creating a venture or scaling up a company, you will find this book inspiring, and you will draw many lessons from Dave Hitz's experiences.
A few that resonated very strongly with me:
- the value of dissent in a growing organization, how it needs to be cultivated and not managed-out
- multiple takes on how to make effective decisions in a perpetual risk-it-all start-up, and how managing the consequences of hard, early choices is as important as making the choices
- what makes the culture of a company, the pressures that make it evolve, and how to manage its transitions without losing identity and the momentum it creates

And it is a fun read, which for me was a welcome break after a few weeks of reading spartan technical books.

Recommended.

Thursday, January 8, 2009

The cloud matters

Well, this is certainly going to be "duh"-obvious, but it's clear that we are entering the year of The Cloud. The rapid-fire succession of announcements from major players of one form or another of cloud offering is not just the result of herd mentality or boredom early in the year: it's a symptom of the relevance the cloud has reached.

Why now?

- A first reason is the current crisis. On each and every project, controlling cost has become the core preoccupation. And I really mean control as in exerting control over it, predicting it, managing it, not just containing it.
- A second reason is the availability of manageable technical solutions to support cloud-based platforms and applications. We are beyond cloud-supported data or document storage, we are now in the days of cloud-supported services and netbooks.
- And a third reason is the emergence of cloud providers with enough clout to allow significant early adopters to take the rational bet of overcoming the remaining complexities and confusions and start leveraging the offerings. Traditional platform vendors, major Web Commerce players, innovative newcomers.

The confluence of these three trends is - in my neophyte opinion - the key to the unavoidable success of The Cloud in 2009.

How will that happen?

- The economic drivers make The Cloud more relevant. Projects will focus on the management of cost, and The Cloud's inherent support for controllable scalability will make it the most attractive platform to work on.

- The innovative development community - including software vendors as well as system integrators - will flock to The Cloud, will support the development of the corresponding tooling (IDEs, platforms/middleware, ...) and will ensure the success of the various cloud-based or cloud-supported platforms (PaaS) on which service-based (SaaS) applications will be created and/or composed.

- The support for dedicated governance and management provided by cloud-based or cloud-supported platforms will prove very attractive to businesses that have been burnt by previous attempts to create major on-premise SOA applications.

- Governance, life cycle management, etc, will become sources of differentiation between cloud offerings, and that will lead to significant innovation and momentum in that very key domain.

- Ubiquity and better governance and management will enable more customer/collaboration/social-centric (rather than process-centric) applications, shifting the focus from typical B2B/B2C to Business-to-Community (B2Comm?).

- Which will create a further need for differentiation: better-quality services, easier to assemble into high-value, evolving, adaptive applications that take the various communities (users, analysts, customers, etc.) as part of their essence.

- Development will be permanent, deployment frequent, adaptation constant. New cultural points of view will be included in the applications.

- Etc.

All of this is enabled by the fact that The Cloud makes application cost manageable: it becomes something that can be throttled. Just think about it: how does the cost manageability of a major cloud-supported enterprise application compare with that of a major J2EE enterprise application (if you want to suffer, say WebSphere)?


What to be careful about?

Applications, in particular the "enterprise" applications I deal with, live and die by the quality of the decisions they support. An application can be very beautiful and execute very fast and in a secure way, but if it fails to generate the business value that is expected, its growth - or even its survival - is questionable.

Enterprise Decision Management (EDM), or Decision Management (DM), addresses that - in the pre-cloud days.

The challenge is now to think about how to approach EDM / DM for The Cloud.

Interesting times.

Saturday, January 3, 2009

A little more on risk mismanagement

The NYT just published an interesting analysis of the role played by modern financial models in the current meltdown [http://www.nytimes.com/2009/01/04/magazine/04risk-t.html?_r=1&partner=permalink&exprod=permalink&pagewanted=all]. The article highlights the particular role played by VaR models and the institutionalized and improper reliance on that kind of model.

I have blogged about this in the past. It's common knowledge - although also commonly ignored - that mathematical models are just that: models. They operate under sets of assumptions that need to be understood for the models to be properly applied.
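To illustrate - with a deliberately naive calculation and invented return histories, not a real methodology - here is a one-day historical VaR estimate computed over two different pasts. The model is "just a model": the number it produces depends entirely on which history you assume is representative.

```python
# Deliberately naive sketch with invented data: one-day historical Value-at-Risk
# at 99%, computed over two different return histories to show how much the
# estimate depends on which past you assume is representative.
def historical_var(daily_returns: list, confidence: float = 0.99) -> float:
    losses = sorted(-r for r in daily_returns)        # losses as positive numbers
    index = int(confidence * len(losses)) - 1         # naive empirical quantile
    return losses[max(index, 0)]

calm_period = [0.001 * ((-1) ** i) for i in range(250)]               # quiet +/- 0.1% days
stressed_period = calm_period[:-5] + [-0.04, -0.06, -0.08, -0.05, -0.07]

print("VaR(99%), calm history:    ", historical_var(calm_period))      # ~0.001
print("VaR(99%), stressed history:", historical_var(stressed_period))  # 0.05
```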

But as pointed out earlier [http://architectguy.blogspot.com/2008/12/financial-instruments-systems-analysis.html], the key problem from a technical standpoint is that no real systems analysis has been attempted on the complex combination of financial instruments being put to work. No exception or error propagation analysis, no interface consistency analysis, etc. These are all notions that are foreign to the daily practice around these instruments. Quants are not systems analysts.

To the industry's credit, it may well be that the sheer complexity such a systems analysis entails makes it practically unfeasible. If that ends up being the case, regulation-based restrictions on these instruments in the largest financial markets may help rein in the complexity - at the cost of reduced creativity. The current meltdown makes it clear why such a move would probably be positive in the short to medium term - and would avoid having taxpayers foot a huge bill to bail out an already highly rewarded and ultimately irresponsible industry.

If the financial industry wants to avoid over-regulation, it will have to prove it can control its risk, through a combination of better overall systems analysis and better understanding of the decisions made.

Decision management is gradually incorporating the relevant aspects of systems analysis, and it will turn out to be an unavoidable piece of the core processes that use these complex financial instruments. You need to be prepared to deal with the unexpected and to understand the impact of the decisions you make.