Cluster Configuration for the Oracle Service Bus (OSB)

I was speaking to a colleague today on IM, trying to explain clustering configuration for the OSB (or SB for those on 12c now). It's quite simple but very difficult to explain over IM, so I thought a quick entry here would be useful for my colleague and anyone else who might find this interesting.

The Setup

For this example, we’ll assume a simple cluster with – perhaps – the most complicated possible setup of host names. The configuration we’ll use is as follows:

  • 4 Hosts – 2 web server hosts and 2 OSB hosts
  • 2 Managed Servers – one OSB managed server on each of the OSB hosts
  • 2 Web Servers – one web server on each of the other two hosts
  • 1 Load Balancer

Some Terminology

The terminology in this area can always cause a bit of confusion so some definitions are in order …


Host

This is a server (physical or virtual); it may have many IP addresses and names associated with it.

Managed Server

This is a WebLogic concept. It is a WebLogic instance that is associated with a domain. Among other things, the managed server has its own JVM and network port.


Cluster

A cluster is also a WebLogic concept. Multiple managed servers may be grouped together in a cluster to allow for load balancing and other operations. For 11g (I'm not sure about 12c) you can only define one OSB cluster in a domain.

Cluster Address

The cluster address is simply a comma-separated list of names and ports for the managed servers that are part of the cluster. For this example, our managed servers are running on hosts myosb1 and myosb2 on port 8011, so the cluster address would simply be

myosb1:8011,myosb2:8011

This address style is understood by WebLogic (and not much else), which is why we will use the web servers to help us out.
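As a trivial illustration of the format (this snippet isn't from the original post – it's just string assembly, using the OSB host names from the walk-through below):

```python
# Build the WebLogic cluster address: host:port pairs, comma separated.
# Host names and port are the ones used in this example.
hosts = ["myosb1", "myosb2"]
port = 8011

cluster_address = ",".join("%s:%d" % (host, port) for host in hosts)
print(cluster_address)  # myosb1:8011,myosb2:8011
```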

Load Balancer

A load balancer may be implemented in any number of ways. For this example we will assume that there is a “thing” in our network topology associated with a single public DNS name.


So for this example, we will use the details as depicted in this diagram:

[Diagram: load balancer → WebHost1 / WebHost2 (web servers) → myosb1:8011 / myosb2:8011 (OSB managed servers)]
The Interesting Part

So after all that, we can get to the “good stuff”™

Managed Server Configuration

In order for everything to work correctly, we need to make sure the managed servers use their DNS name for their External Listen Address. The reason for this – as you'll see later – is that the WebLogic webserver plugin in the web server will do a reverse name lookup for the IP address of the server, and if it doesn't match then it won't work! Often, the managed server listen address is set to the IP address of the server, as shown here:

[Screenshot: the managed server Listen Address set to the server's IP address]

This is fine, but we need to set the real DNS name of this server as well. In the advanced section of the managed server configuration you will find the External Listen Address, where we set the real DNS name:

[Screenshot: the External Listen Address field set to the server's real DNS name]

The result of this configuration is that the OSB will use this name as its endpoint – for example, when clients reference the ?WSDL functionality of a proxy.
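For scripted setups, the same setting can be applied with WLST – the console's External Listen Address field maps to the ExternalDNSName attribute of the server MBean. A rough sketch (the server name, credentials and admin URL below are placeholders, not from this post):

```python
# WLST (online) sketch -- placeholder credentials, URL and server name
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
cd('/Servers/osb_server1')
# 'External Listen Address' in the console == ExternalDNSName on the MBean
cmo.setExternalDNSName('myosb1')
save()
activate()
disconnect()
```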

Cluster Configuration

As we discussed earlier, the cluster address is a simple thing, but you can't use it through a web browser or SoapUI or anything that's not WebLogic, really, so we need to configure the cluster to return the right thing. We need to go to the cluster in the admin console and select the HTTP tab, where we can configure the Frontend Host. This lets the OSB tell the outside world to use the load balancer name in order to access the cluster.

[Screenshot: the cluster's HTTP tab with the Frontend Host set to the load balancer name]

Simple as that!
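The Frontend Host setting is scriptable too; in WLST it lives on the cluster MBean as FrontendHost (with FrontendHTTPPort / FrontendHTTPSPort alongside). A sketch, with placeholder cluster and load balancer names:

```python
# WLST (online) sketch -- cluster name and load balancer name are placeholders
edit()
startEdit()
cd('/Clusters/osb_cluster')
cmo.setFrontendHost('lb.example.com')   # the load balancer's public DNS name
cmo.setFrontendHTTPPort(80)
save()
activate()
```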

The Webserver

Oracle supports a number of different web servers with the WebLogic webserver plugin, and the configuration is more or less similar for each of them. I wrote a quick example for IIS some time ago, here.

The basic idea is that you specify the cluster address in the configuration so …


The actual configuration will vary by web server so you should have a look at the Oracle docs here
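As an illustration only – for an Apache-based web server (OHS, for example) the relevant fragment might look something like the following; the module path and the /osb location are placeholders:

```apache
# Load the WebLogic proxy plugin (path varies by web server and platform)
LoadModule weblogic_module modules/mod_wl.so

<Location /osb>
  SetHandler weblogic-handler
  # The cluster address: managed-server host:port pairs, comma separated
  WebLogicCluster myosb1:8011,myosb2:8011
</Location>
```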


So that’s pretty much it. The way it works is …

  1. A client requests a URL using the load balancer's public DNS name
  2. The load balancer directs the request to WebHost1 or WebHost2
  3. The web server invokes the WebLogic webserver plugin and looks up the cluster address
  4. The WebLogic webserver plugin connects to myosb1 or myosb2 over t3 on port 8011
    1. This is where the reverse lookup is important
  5. The OSB server processes the request and sends the results back to the WebLogic webserver plugin using the name myosb1 or myosb2
  6. The web server and load balancer return the results to the client using the load balancer name
  7. Everyone is happy.

Although it seems like there's a lot going on, the structure is actually quite simple and allows all your clients to use a single URI to access services, while ensuring that all the internal details of load balancing, failover, etc. are hidden away.

Posted in Fusion Middleware, Oracle, OSB

OIM and OAM Resource Cluster Targets

Oracle quite conveniently provide guidelines for targeting resources to clusters / managed servers / etc. in the Enterprise Deployment Guide for SOA Suite. For those who haven't seen it, it's available here.

Unfortunately, there doesn’t seem to be such a guide for an Enterprise Deployment of OAM and OIM, which presents a problem when scripting such a deployment.

I have compiled the tables below to provide targets for these specific components in the hope that it helps anyone scripting an OAM/OIM domain.

These tables do not repeat the information provided for the SOA components and should be used as an extension to the SOA targets
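For anyone scripting the targeting, rows like those in the tables below can be applied with offline WLST's assign() command. A sketch, using a couple of rows as examples (the domain path is a placeholder):

```python
# Offline WLST sketch -- apply targets from the tables below
readDomain('/u01/domains/iam_domain')   # placeholder path

# Applications: assign(type, name, 'Target', comma-separated targets)
assign('AppDeployment', 'DMS Application',
       'Target', 'WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster,oam_cluster,oim_cluster')

# Libraries are targeted the same way
assign('Library', 'oracle.soa.workflow.wc#11.1.1@11.1.1',
       'Target', 'SOA_Cluster,oim_cluster')

updateDomain()
closeDomain()
```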


Applications

oam_server# oam_cluster
oim# oim_cluster
FMW Welcome Page Application# AdminServer
DMS Application# WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
wsil-wls WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
wsm-pm WSM-PM_Cluster, oim_cluster, (soa_cluster)
oamsso_logout# oam_cluster, AdminServer
Nexaweb oim_cluster
OIMMetadata# oim_cluster
OIMAppMetadata# oim_cluster
spml-xsd oim_cluster
SodCheckService oim_cluster
ProvCallback oim_cluster
oracle.iam.console.identity.self-service.ear#V2.0 oim_cluster
oracle.iam.console.identity.sysadmin.ear#V2.0 oim_cluster
RoleSOD oim_cluster
Reqsvc oim_cluster
am_admin# AdminServer
TaskDetails soa_cluster

Libraries
oracle.wsm.seedpolicies#11.1.1@11.1.1 WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
oracle.dconfig-infra#11@ WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
orai18n-adf#11@ WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
oracle.adf.dconfigbeans#1.0@ WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
oracle.pwdgen#11.1.1@ WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
oracle.jrf.system.filter WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
jsf#1.2@ WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
jstl#1.2@ WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
UIX#11@ WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
ohw-rcf#5@5.0 WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
ohw-uix#5@5.0 WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
oracle.adf.desktopintegration.model#1.0@ WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
oracle.adf.desktopintegration#1.0@ WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oam_cluster, oim_cluster
oracle.sdp.client#11.1.1@11.1.1 BAM_Cluster,SOA_Cluster, oim_cluster, oam_cluster
oracle.webcenter.composer#11.1.1@11.1.1 AdminServer, oim_cluster, oam_cluster
oracle.idm.common.model#11.1.1@11.1.1 AdminServer, oim_cluster, oam_cluster
oracle.idm.uishell#11.1.1@11.1.1 AdminServer, oim_cluster, oam_cluster
oracle.idm.ipf#11.1.2@11.1.2 AdminServer, oam_cluster
oracle.idm.ids.config.ui#11.1.2@11.1.2 AdminServer, oam_cluster
coherence# AdminServer, oam_cluster
oracle.oaam.libs# oam_cluster
oracle.rules#11.1.1@11.1.1 BAM_Cluster,AdminServer,SOA_Cluster, oim_cluster
oracle.soa.workflow.wc#11.1.1@11.1.1 SOA_Cluster, oim_cluster
oracle.soa.worklist.webapp#11.1.1@11.1.1 SOA_Cluster, oim_cluster
oracle.soa.rules_editor_dc.webapp#11.1.1@11.1.1 SOA_Cluster, oim_cluster
oracle.soa.rules_dict_dc.webapp#11.1.1@11.1.1 BAM_Cluster,SOA_Cluster, oim_cluster
oracle.sdp.messaging#11.1.1@11.1.1 BAM_Cluster,SOA_Cluster, oim_cluster
oracle.soa.workflow#11.1.1@11.1.1 SOA_Cluster, oim_cluster
oracle.iam.ui.model#1.0@ oim_cluster
oracle.iam.ui.view#11.1.1@11.1.1 oim_cluster
oracle.iam.ui.oia-view#11.1.1@11.1.1 oim_cluster
oracle.iam.ui.custom#11.1.1@11.1.1 oim_cluster
oracle.grcc.oaacg#8.6.1@8.6.1 oim_cluster
org.bouncycastle.bcprovider#1.1@1.36.0 oim_cluster


Data Sources

opss-DBDS soa_cluster, oam_cluster, oim_cluster
oamDS oam_cluster, AdminServer
mds-owsm WSM-PM_Cluster,AdminServer, oim_cluster
oimOperationsDB oim_cluster
mds-oim oim_cluster
oimJMSStoreDS oim_cluster
ApplicationDB oim_cluster

Self Tuning

MaxThreadsConstraint-0 oim_cluster
MaxThreadsConstraint-1 oim_cluster
OIMUIWorkManager oim_cluster
OIMMDBWorkManager oim_cluster


Diagnostic Modules

Module-FMWDFW WSM-PM_Cluster,OSB_Cluster,BAM_Cluster,AdminServer,SOA_Cluster, oim_cluster, oam_cluster
Posted in Fusion Middleware, Oracle, SOA

Being Agile: BAU vs New Project

Following on from the recent blog about some of the key success and failure factors of Agile implementations, I wanted to discuss some of the differences in approach when attempting to tackle new projects in an Agile manner, as opposed to Business as Usual (BAU) work.

When conducting BAU work, the value in following an Agile methodology is really about three things:

  1. Delivering value to the customer regularly
  2. Delivering the current highest priority requirement (enhancement or defect fix)
  3. Delivering changes in requirements or priorities of requirements to the customer without a significant delay

The basic idea is that at the beginning of every sprint (generally a 2 – 4 week cycle) a selection of enhancements, and possibly defects, in the form of user stories are identified and committed to by the team for delivery at the end of the sprint.  Within that sprint, should the priorities change such that waiting for the following sprint isn't an option, the scope can be changed.  This comes at a cost – e.g. the likely non-delivery of some other committed-to items – but it can be done, and it allows the customer to see the value of a changing requirement fairly quickly.

The Product Owner will ultimately decide if they want to actually release the items that are ‘Done’ in a sprint to a Production environment but the goal is that they should be able to if they want to.

New Projects are Different

By new projects, I refer to the creation of a product that doesn’t already exist or the complete overhaul of an existing one – architecturally or from a look and feel perspective.  I have seen a number of new projects fail by organisations taking the same (usually Scrum) approach they have been using successfully whilst working in BAU mode and applying it directly.

Some of the aspects of Waterfall approaches worked fairly well for new projects.  They generally specified distinct phases that allow for the activities required by a new project.  Take a traditional Waterfall phased approach like the one below:

[Diagram: Analysis → Requirements Specification → Design → Development → Testing & Integration]
The Waterfall model suggests that these phases should be distinct, such that one only starts when the previous one has finished.  These phases still apply regardless of whether you are conducting BAU work or working on a new project – the main difference being that the phases will be much shorter for BAU work, as you will generally only be delivering a few items in some form of maintenance release.  Whichever mode you are working in, the distinguishing characteristic of Waterfall is that the analysis for EVERYTHING you are intending to release is conducted before the requirement specification phase begins.  Then the requirement specification phase for EVERYTHING you are intending to release is conducted before the design phase begins, and so on and so forth.

Now, if you are working in BAU mode with regular maintenance releases, the negative impact of taking this kind of approach is not that significant but let’s consider a new project that is estimated to take about a year to complete.  By the time you get to the testing and integration phase a significant amount of time has already passed and in that time, the requirements will almost certainly have changed (how many businesses / customers know exactly what they want a year before they get it?) and only at the point of testing where you actually start to see the software that has been developed do any issues relating to any of the previous phases become truly apparent.  By this time, the cost of fixing these issues is likely to be huge and the only way to cope with a change in requirement is to go right back to the beginning of the cycle again.

This is Where Scrum Comes In

In a nutshell, Scrum suggests breaking requirements down into the smallest deliverable and valuable chunks and only working through these phases for the current highest-priority items (or functional areas).  It also suggests that these things can happen in parallel for a number of different user stories at the same time (not too many!), with the goal of delivering a small chunk of something to a tester at the earliest possible opportunity, and thus being able to show any relevant stakeholders what that small chunk actually looks like at the earliest opportunity – rather than a scenario where all requirements are developed before delivering to a test environment and all requirements are tested before delivering to some kind of user acceptance testing environment.  Only conducting analysis, requirements specification and design on the next priority items means that you don't waste time conducting these activities for requirements that may well have changed by the time the team is in a position to develop them.

So, I think people understand the value of using an Agile methodology to deliver value to a customer, but here's why the approach needs to be slightly different when working in BAU mode versus working on a new project.

Let’s talk about BAU first.  You have an existing product.  You have an existing product backlog containing high-level user stories.  Each sprint, the Product Owner will select the highest-priority items from the backlog, and the team will commit to a set of user stories they believe they can deliver by the end of the sprint.  The team will work on them and get them to a ‘Done’ state, a short period of regression testing will likely occur (preferably automated but manual if necessary), and the Product Owner will review what has been ‘Done’ and decide whether they want to release these ‘Done’ items to Production – which they likely will when working in BAU mode.  The analysis and design phases will generally be very short in this kind of scenario because the likelihood when working in BAU mode is that nothing hugely significant will be changing behind the scenes in order to deliver a set of user stories (if it were – you’d probably be calling it a new project!).

Organisations often get comfortable with this approach, and when it comes to working on a new project, a Scrum Team is formed and the team attempts to start delivering value to the customer from Sprint 1 without taking into account the following considerations – which are typically some of the characteristics of a new project:

  1. You have no existing product – therefore
  2. You have no existing architecture for the product
  3. You have no existing product backlog
  4. You have no existing regression scripts (automated or manual)
  5. You have no existing unit tests
  6. You have no existing documentation

Whilst some of the analysis, requirements specification and design (fleshing out of user stories) can still be done on a sprint by sprint basis – eliminating the Waterfall issue of conducting these activities for all requirements for the project before any development and testing can begin – there is still a need on a new project to conduct these distinct phases before any development can begin.  During these phases, you would be doing the following types of activities:

  1. Gathering high level requirements and forming a product backlog
  2. Considering the architectural requirements for the project as a whole
  3. Considering what needs to be in place in order for the project to start
  4. Creating a / some proof of concept(s) to determine whether the project is doable

Likewise, whilst testing user stories can be conducted on a sprint-by-sprint basis and reviews of what has been delivered can be conducted with the Product Owner / users on a sprint-by-sprint basis, there is still a distinct phase of integration testing, user acceptance testing and preparation for a production release that simply needs to be done when you are actually intending to release a ‘Ready’ product to Production.  Depending on the nature of what you are building, this may only be at the end of the project, or there may be distinct points throughout the project where there is value to the customer in delivering what has been ‘Done’ so far.

But Scrum says I should be able to release at the end of every Sprint?

Yes, it does, but that doesn’t mean that you should.  In a new project scenario where you are creating a product from scratch, whilst you will hopefully have some stable software in a working state at the end of each sprint, it would make no sense to release until enough user stories have been completed that delivering what has been built will actually provide value to the customer.  You may consider deploying to some kind of integration or staging environment but, again, it only makes sense to do so if there is a purpose in doing so.

There are overheads involved in releasing to Production, such as integration testing, full regression testing, user acceptance testing, release preparation and documentation, and knowledge transfer to operations or support teams.  These activities obviously impact the amount of development and testing that can be conducted on user stories, so for a new project there is little point in incurring them until there is genuine value in releasing to Production.

If we are not releasing to Production every Sprint, why use Scrum at all?

Well, the value in using Scrum for a new project is less about quick delivery of a finished product to Production and more about mitigating risk and coping with changing requirements.  Using Scrum for new projects when it comes to fleshing out, developing and testing user stories and reviewing these with the Product Owner allows early visibility of the team going off track, early visibility of what the product is actually going to look like and allows the user to change their mind about the requirements without significant impact to the project.

In summary, whilst pure Scrum works well for BAU work, my experiences have led me to the conclusion that using Scrum plus some of the more traditional techniques suggested in the Waterfall model is the most successful approach for new projects – and the one that has most often led to the outcome that all customers want, where the required scope is delivered on time and on budget (or even under!).

Posted in Agile

Being Agile: Key Success and Failure Factors

I was first introduced to Agile methodologies about 8 years ago, specifically the Scrum framework.  Like many people, I was sceptical at first.  I had worked in environments using Waterfall methods, some very successfully, for many years, and the idea of throwing away heavily detailed requirements documents in favour of lighter-weight user stories that were ‘fleshed out’ by the team via discussions seemed bizarre to me.  It felt like trying to fix something that wasn’t really broken.

After a few months, however, as the team and the organisation that first introduced me to Agile became more mature and confident in its implementation, I really got to see the benefits that Agile can bring when done well:

  • How much quicker you can respond to a customer’s needs
  • How much better you can cope with changing scope
  • How much easier it can be to mitigate risk
  • How much earlier you can gain visibility on progress

Since my initial introduction to Agile, I have had many opportunities to experience Agile done well and Agile done not so well and to understand the factors that influence its success and its failures.  This blog post aims to share some of my observations of when Agile works well, when it doesn’t and to discuss some of the different approaches that I have seen work best in different circumstances.

My first experience of Agile has turned out to be the most successful.  There were a number of things about the organisation I was working for at the time that made it an ideal candidate for using Scrum:

  1. The organisation had a number of existing products that required fairly regular, small enhancements
  2. Each product had a clearly defined group of people on the business side that were ultimately responsible for the product
  3. IT Teams were co-located and in almost all cases, they were also co-located with the business
  4. The organisation at a Senior Management and board level was open to change
  5. Existing products were largely in Business as Usual (BAU) mode

It certainly isn’t the case that Scrum or other Agile approaches can’t be successful in an environment that doesn’t have all the characteristics listed above, but it does mean that there are extra and different challenges that may require more effort to overcome.

The characteristics of this particular organisation weren’t the only factors in the successful implementation of Agile to the organisation.  The key driver for success was the approach to implementation.


Once the environment was identified as a good candidate for using the Scrum framework, the head of the IT department – the Agile champion – understood that in order for the implementation of Agile to be successful, it had to be a joint effort between the business and IT.  He understood that being Agile wasn’t just a way for IT folk to develop software; it had to be more of a culture – a culture for the organisation as a whole – and he spent significant time and effort ensuring that not only were the Senior Management from both the IT and the business side engaged, but also pursued full engagement at board level.  Implementing Agile was going to require some changes to be made to the organisation – some fairly dramatic ones – and this wouldn’t be possible without backing at the highest level.  It was understood that Agile wasn’t a silver bullet and wouldn’t solve the organisation’s problems overnight, and that in order for it to be successful, a significant amount of time and money would need to be invested initially before the benefits started to be seen.

Once the organisation as a whole was on board and ready for this fairly significant change in delivering value to the customer, the company invested a lot of time and money into the approach by taking the following steps:

  1. Product Owners were identified from the business.  Strictly, there was one Product Owner per product.  That Product Owner might delegate certain activities to other team members, and they might represent the sometimes conflicting views of a number of different stakeholders from the business side, but ultimately – and this point is key – that one person was the decision maker from the business perspective.
  2. Product Owners were trained and their roles and responsibilities were adjusted to make them available to carry out Product Owner-related activities.
  3. Multi-disciplined IT teams were formed (Scrum Teams) and aligned to a product or a suite of products (i.e. instead of having a team of developers; a separate team of testers; a separate team of business analysts etc., a single team was formed that consisted of the key disciplines needed for that team to deliver value to the customer).
  4. All IT team members were trained.
  5. All Scrum Teams were empowered in a true sense.  Again, I believe this was key to the success of Agile at this organisation.  Each Scrum Team could decide:
    • The ideal length of sprint
    • The method for estimation (hours, points, t-shirt sizes, it didn’t matter, it was clear that the estimation was only significant for that Scrum Team and not something that could be measured holistically)
    • Their own standards for methods of communication
    • Their own methods for documentation, knowledge sharing
    • Their own processes, essentially.  All Scrum Teams were given guidance in the basics of Scrum but the specifics of exactly when meetings took place, how support issues were handled etc. were decisions that were made by the team.
  6. Scrum Teams were responsible for all aspects of product development, including support.  How they managed support was up to the team.
  7. Scrum Teams were encouraged to continually ‘inspect and adapt’ – identify processes, activities and methods that were working, identify those that were not and come up with their own ideas of how things could be done better.
  8. Scrum Teams were urged to share their successes, failures and lessons learned with other teams so that the organisation as a whole could benefit from the experience.

The implementation of Agile was not perfect, of course.  No significant change in the way a company does something is going to come without pitfalls and mistakes, but the key to success is being open to identifying those things that are not working and being in a position to change them.  For example, one of the things that happened when this particular organisation first started using Agile, and requirements became user stories in a Product or Sprint Backlog, was that general product documentation was not being produced or maintained.  The focus became delivering as much value as possible to the business at the end of each sprint, and some of the artifacts or activities that didn’t directly contribute to that goal were being sacrificed.  Some examples of the kind of thing I am referring to here are technical documentation, business process documentation, user documentation and code refactoring.  The problem with keeping the focus always on delivering value to the customer quickly, in lieu of some of these activities that provide longer-term value, is that it comes back to haunt you later in the form of technical debt, overhead in training new team members and many other ways.

After some time, this was identified as a problem and each team had to deal with it but, again, how each team dealt with it was up to them.  Some teams chose to factor in a certain amount of time each sprint to conduct some of these activities.  Some teams chose to have a ‘synchronisation’ sprint every few sprints – the frequency being determined by the team – where all team members would focus on activities that would reduce technical debt, assist with knowledge transfer, etc.

I think what also heavily contributed to the success of Agile at this organisation was the fact that the IT Senior Management continued to engage the business and board members in the process long after the initial implementation of Agile and held regular open discussions with them about how it was working, not working, what they thought could be done better.  This continuous process of inspecting and adapting at an organisation as well as Scrum Team level never stopped.

So, I’ve talked a lot about how Scrum can work well.  To give some balance to this topic, I want to take some time to share my observations about when Scrum doesn’t work so well.

The Type of Organisation

As I mentioned earlier in this post, one of the key factors in using Scrum successfully in my experience has been about empowering Scrum Teams and allowing them to make their own decisions where appropriate.

In an ideal world, all organisations wishing to implement Agile would have an environment where teams truly could make their own decisions about how they do things and when they deliver to the customer but in reality, for many organisations, this just isn’t realistic.  I have found that organisations that are heavily regulated and audited, such as those in the Financial Services industry, particularly struggle with the adoption of Agile.  In these kinds of organisations, there is often a need to be able to provide evidence, quickly and in a standard format, that supports some kind of decision that has been made by the organisation.  The nature of this means that these companies tend to be more heavily focused on fairly fixed standard processes for all, standard tool use for all, fixed delivery windows for all, and so on.  By definition, Scrum Teams simply can’t be self-organising and truly empowered in this kind of environment.  That isn’t to say it isn’t possible to implement Agile methodologies in these kinds of organisations, but it is to say that it is a far greater challenge and that these organisations will have to work harder to find a ‘flavour’ of Agile that works for them and be prepared to make far greater changes to the current organisational set-up.

Over-Standardising the Process

I am not suggesting that having standard processes is a bad thing and should be avoided by any organisation wishing to become more Agile.  Having standard processes for certain activities is good and necessary.  Having too many, too heavy, too inflexible standardised processes is likely to make being Agile impossible.  For example, having a standard tool that everyone uses to capture requirements (user stories) makes sense.  Standardising the exact format all user stories should take and the specific method that should be followed to gather and groom the backlog in which they reside, doesn’t.

The problem I am getting at here is when organisations implement Agile for the sake of implementing Agile rather than for the sake of improving the speed at which the organisation can respond to change, mitigate risk and deliver value to the customer which, in my mind, is really what it is all about.  Implementing Agile holistically is important in terms of creating a culture and environment in which the benefits of Agile can be realised but implementing fixed interpretations of Agile methodologies as standard processes holistically – something I have seen done in a number of organisations – takes away the fundamental ideology of being Agile that makes it work, such as the self-organising and empowering aspect of it.  The key to success is having the common sense to identify what to standardise and what not to.  For example, standardise the information that each Scrum Team has to provide to stakeholders in order to allow for a quick response to an auditing or regulatory request, standardise where that information is held so that it can be accessed with ease but don’t standardise the method the teams must take to collect that information.  As another example, standardise a set of documentation that each Scrum Team should provide and be responsible for the updating and maintenance thereof but don’t standardise the exact format of the documentation – allow teams to document their product in the manner that best suits them.

The Business Not Being Fully Engaged

This is something I have seen happen a lot, and it is a key factor in Agile being unsuccessful.  When an organisation whose primary business is not IT – i.e. one where IT is a supporting function to the business – tries to implement Agile from an IT perspective without fully engaging the business, it leads to all sorts of problems which are, in my opinion, impossible to solve from an IT perspective alone.  The issue of the business not being fully engaged / on-board can present itself in a number of common ways:

  1. No Product Owner.  Without a key decision maker from the business, as an IT department / team you are simply guessing what the business wants and hoping that what you deliver will provide value.
  2. A Product Owner that doesn’t understand their role or doesn’t have the time available to conduct Product Owner activities.  This is a fairly common issue in implementations of Agile, particularly Scrum, that I have seen fail.  A Product Owner who can make decisions but isn’t available to respond to the Scrum Team, prioritise backlog items and review ‘ready’ software promptly is likely to create a scenario where teams are continually carrying items of work over to the next sprint, or where what is delivered isn’t what the business actually wanted.
  3. Multiple Product Owners.  Again, a fairly common issue I have come across.  This is where a Scrum Team has more than one person from the business designated as the decision maker.  Often these people have conflicting viewpoints, which leads to time being wasted discussing and negotiating between them, and the likely delivery of software that none of the Product Owners are happy with.

Not Adapting the Process to Fit The Circumstance

Another common mistake I have seen organisations make is in selecting a framework to follow and not adapting it for changing circumstances.  For example, an organisation may decide that the Scrum framework will work best for them and invest a lot of time and effort in implementing it but fail to adapt the approach for different circumstances.

From my experience, there are three main scenarios in which software is developed and delivered:

  1. New Projects: Building a new product from scratch or a complete overhaul and re-work of an existing product.
  2. Business as Usual (BAU): Delivering software for an existing product, consisting of enhancements and non-emergency defect fixes, on a frequent (generally scheduled) but not immediate basis.
  3. Support: Dealing with customer issues and, if necessary, delivering software containing emergency defect fixes without waiting for the next scheduled release.

The purest form of Scrum – the one that most people understand and feel comfortable with – works pretty well in a BAU scenario, but when it comes to new projects and support issues, trying to fit the same framework to these scenarios is likely to cause problems.  I will be writing a separate blog post that goes into more detail on why taking different approaches for BAU and new project work is important but, to give an example, when you are building a new product or completely overhauling the architecture or the look and feel of an existing product, it is neither realistic nor necessarily advisable to aim to deliver software to the customer at the end of every sprint, so the value of using Scrum is more about mitigating risk and coping with changing requirements than delivering value to customers quickly and regularly.  Equally, when dealing with support, the goal is always to work on the highest priority item and deliver it to Production as quickly as possible, so selecting items for a sprint, prioritising them and estimating them all serves little purpose and wastes time when the highest priority item may change more frequently than a standard sprint can cope with.  In this scenario, something like Kanban is more effective.

Finally, a quick note on Waterfall methodologies – a term that has sadly almost become a dirty word!  I am a fan of Agile.  When implemented well, I believe it can add a huge amount of value to an organisation, but I also believe there is still a place for Waterfall methodologies in certain circumstances, and organisations shouldn’t be discouraged from following them where those circumstances apply.  In environments where, for example, scope is generally fixed and unlikely to change (which is not as unusual as you may think – changes necessary to meet regulatory requirements, for example), teams are not co-located, or the business is not willing to actively engage in IT delivery, following a Waterfall methodology is likely to be far more successful than any implementation of Agile could be – and if that is the case, then organisations should do exactly that.

Posted in Agile

Oracle Fusion Middleware Continuous Delivery – Introduction

One of the main areas of interest in our consultancy practice is continuous delivery focusing on the Oracle Fusion Middleware stack. In this series of posts, I will demonstrate the key concepts of continuous delivery with examples for various Fusion Middleware components starting from the ground up.

There are many resources available on the internet – and in the Oracle documentation – that discuss these topics but many assume a working knowledge of one aspect or another. This series is different. We will work from the very basics and provide step by step examples for each component. The main tools we will use are:

Puppet – The examples will use the open source version of Puppet, but a future instalment will delve into the extra features of Puppet Enterprise and the benefits they bring.

Maven – We will use Maven for both build and deploy project management.

Nexus – Our examples will use Nexus as our Maven repository but the examples will work just as well with Artifactory and we will highlight any differences.

Jenkins – We use Jenkins for the execution of build and deploy tasks and will build a master / slave configuration so we can scale our process horizontally.

Git – All of our source code management is done with Git and we will show how to structure your branches using git flow to maintain a clear correlation between branches, builds and deployments.
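To give a concrete sketch of the git flow layout, the branch mechanics boil down to plain git commands like these (repository and branch names are illustrative – the git flow tooling simply wraps this pattern in higher-level commands):

```shell
# Sketch of the git flow branch model using plain git commands.
cd "$(mktemp -d)"
git init -q demo-repo && cd demo-repo
git config user.email "dev@example.com"
git config user.name "Demo Developer"
git commit -q --allow-empty -m "initial commit"   # the default branch holds releases
git checkout -q -b develop                        # long-lived integration branch
git checkout -q -b feature/audit-logging          # feature branches hang off develop
git commit -q --allow-empty -m "add audit logging"
git checkout -q develop
git merge -q --no-ff -m "merge feature/audit-logging" feature/audit-logging
git branch -d feature/audit-logging               # branch is gone, history keeps the merge
```

The `--no-ff` merge is the important part: it preserves a merge commit on develop for every feature, which is what gives the clear correlation between branches and builds.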

Fusion Middleware – This is the good stuff! We will automate the installation of Fusion Middleware environments, the building of application components and their deployment to our environments.

All of our server examples will use the Amazon EC2 cloud platform and we will address some of the issues that you may find while working with that environment. So now is a good time to get yourself an AWS account if you don’t already have one.

Before we get started, let’s define what continuous delivery means …

Elements of continuous delivery

For the purposes of this series, Continuous Delivery is based on the 8 main principles described in Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble and David Farley.

The principles are:

  • Create a repeatable, reliable process for releasing software
  • Automate almost everything
  • Keep everything in version control
  • If it hurts, do it more frequently and bring the pain forward
  • Build quality in
  • Done means released
  • Everyone is responsible for the delivery process
  • Continuously improve

These principles lead to several practices, which are:

  • Only build your binaries once
  • Deploy the same way to every environment
  • Smoke-test your deployments
  • Deploy into a copy of production
  • Each change should propagate through the pipeline instantly
  • If any part of the pipeline fails, stop the line

These all seem fairly obvious and sensible but, for many reasons, they are very difficult things to implement in most organisations.

The point of this series is to provide a “jump-start” to help overcome the initial time / cost / anguish that organisations often face when implementing these practices for the first time. We extend these principles by going one step further: not only do we fully automate our build and delivery process, we also fully automate the provisioning of environments and the propagation of configuration from one environment to the next. It’s amazing how much time and money can be saved with environment build automation.

So the first steps for our journey will be:

  • Provision a cloud based Puppet master
  • Provision a cloud based SOA Suite environment
    • Build an Oracle database node
    • Build a SOA / OSB Build node
  • Provision a cloud based CI environment
    • Build a Jenkins server
    • Build a Nexus server
    • Build a git server

Following on, we will automate the build and deployment of application code for the entire stack. It’s a big task but, by the end of the series, we will have touched on – pretty much – every aspect of Fusion Middleware and automated almost everything.

Posted in Continuous Delivery, Fusion Middleware, Git, Jenkins, Maven, Oracle, Oracle Database, OSB, Puppet, SOA, Weblogic

Installing Weblogic plugins for IIS 7.5

Over the past few years, I have had to use IIS as a front-end to OSB services at quite a few client sites (mainly because they already had IIS servers in their DMZs). It always seemed to be a bit of an annoyance to install the WebLogic web server plugins and, although the documentation is quite detailed, I’ve always run into one problem or another. Since I’ve recently had to go through this process again, I thought it was worth a post to walk through it step by step for those that haven’t done it before, and also as a guide for those that may not have used the latest version of the plugins.

Which version?

The version 1.0 plugins ship with WebLogic, and the documentation provided by Oracle and most – if not all – of the blog posts I’ve seen do a great job of describing how that installation works. The version 1.1 plugins are available from OTN and provide – IMHO – a much better solution and a very simple installation. I haven’t seen any posts about this after a quick google. Perhaps it’s so easy now that a post isn’t required? Anyway, this walk-through shows installing the 1.1 plugins for IIS 7.5, which is the latest version at the time of writing.

Basic Installation

For this simple walk-through, I will install the 32 bit version of the plugins for IIS 7.5 but the process is exactly the same for 64 bit installations. Also, this walkthrough assumes that all requests are forwarded to the OSB host (or cluster) and that we’re using HTTP rather than HTTPS.


As mentioned earlier, the 1.1 plugins are available for download from OTN.

Otherwise, the only other requirement is a server with IIS 7.5 installed.

Unzip the Plugins

The downloaded archive should be unzipped to a local drive on the web server. I suggest C:\wlsPlugins but the choice is yours. The archive contains bin, jlib and lib directories for the platform. For this simple example, we will just configure an HTTP connection to the OSB server, so the only file we’re really interested in is lib/iisproxy.dll (and the associated helper DLLs in the lib directory).

System PATH

Add the lib directory to the system PATH through the Control Panel. For our example it would be C:\wlsPlugins\lib.

Create iisproxy.ini file

Create a file in the lib directory called iisproxy.ini to hold the configuration. For the most basic configuration, the following items are required.

For a single managed server:

WebLogicHost=myserver1
WebLogicPort=8011

Or, for a cluster, the comma-separated cluster address described earlier:

WebLogicCluster=myserver1:8011,myserver2:8011

Obviously, choose one form or the other depending on whether you’re connecting to a cluster or a single host. Make sure you switch off the “hide extensions” nonsense in Windows Explorer so you don’t end up with a file called iisproxy.ini.txt by accident.
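Since the end of this post refers to the plug-in’s log output, here is a slightly fuller iisproxy.ini sketch. Debug and WLLogFile are standard plug-in parameters; the host names and log path are just our example values:

```ini
# Forward everything to the OSB cluster (example hosts from the diagram above)
WebLogicCluster=myserver1:8011,myserver2:8011
# Verbose plug-in logging - useful while setting up, "turn down" afterwards
Debug=ALL
WLLogFile=C:\wlsPlugins\logs\iisproxy.log
```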

Configure IIS

Now comes the fun part! You can either use the default web site or create a new one. For this example we’ll create a new one …

IIS Manager

Add Mapping

Select the site and double-click “Handler Mappings”.


Select “Add Script Map” from the right-hand menu. Enter * for the path and select the iisproxy.dll file extracted earlier. The name doesn’t matter too much – I chose “Proxy”.
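If you’d rather script the mapping than click through IIS Manager, the equivalent can – as far as I know – be done with IIS’s appcmd tool, along these lines (the site name and plug-in path are our example values; double-check the attribute names against the IIS documentation for your version):

```bat
REM Add a * script map that hands every request to the WebLogic ISAPI plug-in
%windir%\system32\inetsrv\appcmd set config "Demo Site" -section:handlers ^
  /+"[name='Proxy',path='*',verb='*',modules='IsapiModule',scriptProcessor='C:\wlsPlugins\lib\iisproxy.dll']"
```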


Answer “Yes” to the subsequent dialogue.


All Done

That’s it! You should be able to connect to your web server and have all the requests sent to the OSB server. There should also be a lot of information in the plug-in’s log file – you will want to “turn down” the logging once things are working, since a huge amount of debugging information is recorded. There are also many more configuration items available for use in the .ini file; the documentation covers all of these and should be consulted for more detail.

This covers the simplest possible configuration and I may cover more complicated scenarios if there’s an interest in the future.

Posted in Weblogic

Introduction and Context

Hello and welcome to the SignaSol blog. Thanks for visiting! This post is just a quick introduction of who we are and the topics that we’ll present here.

SignaSol is the shorter (and much easier to type) name of Signature Interactive Solutions, a consultancy specialising in Enterprise integration with a focus on the Oracle Fusion Middleware stack. We are also experts in continuous integration, continuous delivery, devops … basically the mechanisms of ensuring development and delivery to production follow the same repeatable patterns.

As avid Unix enthusiasts – especially OSX – we develop iOS and OSX applications for internal use and also for commercial clients.

We are also hugely enthusiastic about continuous integration and automated deployment of SOA Suite applications, and even more enthusiastic about distributed source code management with git and using all of these tools with JDeveloper and OEPE (the Oracle Enterprise Pack for Eclipse).

The topics you’re likely to see here include:

  • Integration Architecture concepts and patterns
  • Oracle SOA Suite and Fusion Middleware implementation tips and techniques
  • Continuous integration practices for build and deploy
  • Being Agile in an enterprise environment
  • Random postings of Python code (Ron is a scary Python advocate)

I don’t expect that too many people will see this first post but at least it gives us a bit of an agenda and we’re all looking forward to providing some good information to you all.

Posted in Uncategorized
