Wednesday, June 8, 2016

Triggering a Client Cache Refresh

More and more web sites are using a single page style architecture.  This means there is a bunch of static resources (aka assets) including:
  • JS 
  • CSS
  • HTML templates
  • Images
  • ...
all residing in the client's browser.  For performance reasons, the static content is usually cached. And like all caches, eventually the data gets stale and the cache must be refreshed.  One excellent technique for achieving this in the web tier is Cache busting (see: here and here).
However, even with this excellent pattern, there still needs to be some sort of trigger to indicate that things have gone out of date.  Some options:

HTML5 Sockets

In this case the server can push a message to any connected client telling it to go and fetch the new assets.
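For illustration only, here is a minimal sketch of what that push could look like with a Java (JSR 356) WebSocket endpoint. The endpoint path, message format and class name are assumptions for the example, not anything from the project discussed below.

import java.io.IOException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Sketch of the WebSocket option: the server tracks connected clients and can
// broadcast a "refresh" message when a new client version is released.
@ServerEndpoint("/asset-updates")
public class AssetUpdateEndpoint {

    private static final Set<Session> SESSIONS = ConcurrentHashMap.newKeySet();

    @OnOpen
    public void onOpen(Session session) {
        SESSIONS.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        SESSIONS.remove(session);
    }

    // Called from release tooling or an admin endpoint when new assets ship.
    public static void broadcastRefresh(String newVersion) {
        for (Session session : SESSIONS) {
            try {
                session.getBasicRemote().sendText("refresh:" + newVersion);
            } catch (IOException e) {
                SESSIONS.remove(session);   // drop broken connections
            }
        }
    }
}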

Comet style polling

The client periodically polls the server in the background, asking if a new version is available. When the server says there is, the client is triggered to start cache busting.

Both of these are good approaches, but in some cases they may not be available:
  1. The architecture may not allow HTML5 sockets - it might be some secure banking web site that just doesn't like them.
  2. Comet style polling might be too intensive, and may still leave edge cases where a critical update is needed but the polling thread has not yet fired.

Client refresh notification pattern

Recently, on a project, neither of the above approaches was available, so I needed something else, which I shall now detail.  The solution was based on some existing concepts already in the architecture (which are useful for other things) and some new ones that needed to be introduced:
  1. The UI always knows the current version it is on.  This was already burnt into the UI as part of the build process.
  2. Every UI was already sending up the version it is on in a custom header, and it was doing this for every request.  Note: this is very useful as it can make diagnosing problems easier. When someone is seeing weird problems, it is nice to be able to see straight away that they are on an old version.
The following new concepts were then introduced:
  1. The server would now always store the latest client version it had received.  This is easy to do and, again, a handy thing to have anyway.
  2. The server would then always add a custom header to every response to indicate the latest client version it had received.  Again, a useful thing to have for anyone doing a bit of debugging with Firebug or the Chrome dev tools (a server-side sketch of these two steps follows after this list).
  3. Some simple logic would then sit in the client, so that when it saw a version in the response different to the one it sent up, it would know there was a later version out there - and that is the trigger to start cache busting!  This should be done in a central place, for example an Angular response interceptor / filter (if you are using Angular).
  4. Then, as part of every release, just hit the server with the latest client during smoke testing.
  5. As soon as any other client makes a request, it will be told that there is a later client version out there and it should start cache busting.
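To make the server side of this concrete, here is a minimal sketch written as a Java servlet filter. The header names (X-Client-Version and X-Latest-Client-Version), the class name and the naive version comparison are my own assumptions for illustration, not details from the original project.

import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of the server side of the pattern: remember the newest client
// version seen so far and echo it back on every response.
public class ClientVersionFilter implements Filter {

    private final AtomicReference<String> latestVersion = new AtomicReference<>("0.0.0");

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String clientVersion = request.getHeader("X-Client-Version");
        if (clientVersion != null) {
            // Keep whichever version is newer.
            latestVersion.accumulateAndGet(clientVersion,
                    (current, candidate) -> isNewer(candidate, current) ? candidate : current);
        }
        response.setHeader("X-Latest-Client-Version", latestVersion.get());
        chain.doFilter(req, res);
    }

    private boolean isNewer(String candidate, String current) {
        return candidate.compareTo(current) > 0;   // naive; a real semantic version comparison is safer
    }

    @Override public void init(FilterConfig filterConfig) { }
    @Override public void destroy() { }
}

The client-side half is then just a central piece of response handling (for example an Angular interceptor) that compares X-Latest-Client-Version with its own built-in version and kicks off cache busting when they differ.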

So the clients effectively tell each other about the latest version, but without ever talking to each other directly - it all goes via the server, a bit like the mediator pattern.  But it is probably still confusing, so let's take a look at a diagram.



With respect to the diagram above: 
  • UI 1 has a later version (for example 1.5.1) of static assets than UI 2  (for example 1.5.0)
  • Server thinks the latest version of the static assets is 1.5.0
  • UI 1 then makes a request to the server and sends up the version of static assets it has e.g. 1.5.1
  • Server sees 1.5.1 is newer than 1.5.0 and then updates its latest client version variable to 1.5.1
  • UI 2 makes a request to the server and sends up the version of static assets it has which is 1.5.0
  • Server sends response to UI 2 with a response header saying that the latest client version is 1.5.1
  • UI 2 checks and sees that the response header client version is different to the version sent up and then starts busting the cache
Now, the observant amongst you will start saying this will never work in a real enterprise environment, because you never have just one server (as in one JVM), you have several - true.  But then you just store the latest client version in a distributed cache (e.g. Infinispan) that all the servers can access.  A sketch of that variation is below.
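A hedged sketch of that variation, assuming an Infinispan cache manager is already configured and a replicated cache named "client-versions" exists; this would replace the in-memory AtomicReference used in the earlier filter sketch. The class, cache name and key are assumptions for illustration.

import org.infinispan.Cache;
import org.infinispan.manager.EmbeddedCacheManager;

// Keep the latest client version in a replicated Infinispan cache so every
// JVM sees the same value.
public class SharedClientVersion {

    private static final String KEY = "latest-client-version";

    private final Cache<String, String> cache;

    public SharedClientVersion(EmbeddedCacheManager cacheManager) {
        this.cache = cacheManager.getCache("client-versions");
    }

    // Record the version a client sent up and return the newest version seen so far.
    public String recordAndGetLatest(String clientVersion) {
        String current = cache.get(KEY);
        if (clientVersion != null && (current == null || clientVersion.compareTo(current) > 0)) {
            cache.put(KEY, clientVersion);   // idempotent write, so racing nodes converge
            return clientVersion;
        }
        return current != null ? current : clientVersion;
    }
}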

Note: In this example I am using two web clients and one back end server.  But the exact same pattern could be used for back end micro-services that communicate with each other.  Basically, it could be used anywhere there is a range of distributed clients (they don't have to be web browsers) and caching of static resources is required. 

Until the next time, take care of yourselves. 


Tuesday, June 7, 2016

Why Agile can fail

Most Development teams now claim they are doing Agile and are usually doing some variant of Scrum.  In this blog post I propose three reasons why teams can struggle with Scrum or any form of Agile development. And conveniently, they all begin with R.

Requirements

In the pre-agile days, requirements were usually very detailed and well thought out.  For example, when I worked in Ericsson, before we moved to the Agile methodology RUP, the projects were based on a classical Waterfall model executed to a very high standard (usually around CMM level 3).  The requirements came from System experts who had lots of experience in the field, were very technically skilled and had incredible domain knowledge. Their full time job was to tease things out, knowing that they effectively had only one chance to get it right.

In the Agile world, with shorter iteration cycles and many more releases, there are chances to put something right when you get it wrong.  Functionality can be changed around much more easily.  This is good.  It makes customer collaboration easier and thus enables more opportunities to tweak, fine tune and get all stakeholders working together sculpting the best solution.

However, because the penalty for getting requirements wrong is not as great as it is in the Waterfall model, the level of detail and clarity in requirements can start becoming insufficient, and before you know it development time in the sprint gets wasted trying to figure out what is actually required. The key is to get the balance right.  Enough detail so that there can be clear agreement across the dev team, customer and any other stakeholders about what is coming next, but not so much detail that people are just getting bogged down in analysis paralysis and forgetting that they are supposed to be shipping regularly.

I suggest one process for helping get the balance between speed and detail for your requirements in this blog post.

Releases

In the Agile methodology Scrum you are supposed to release at the end of every sprint.  This means instead of doing 1 - 4 releases a year you will be doing closer to 20, if not more.  If your release is painful, your team will struggle with any form of Agile.  For example, say a full regression test, QA, bug fixing, build, deploy etc. takes 10 days (including fixing any bugs found during smoke testing): that means 20 * 10 = 200 man days going on releases. Whereas in the old world, with 4 releases it would just be 4 * 10 = 40 days. In case it's not obvious, that's a little bit regressive.

Now, the simple maths tells us that a team with a long release cycle (for whatever reason) will struggle to release regularly and will thus struggle with any Agile methodology.

To mitigate this risk, it is vital that the team has superb CI with very high code coverage and works with a very strict CI culture.  This includes:
  • Ensuring developers run full regression tests on feature branches before merging into dev branches, to minimise broken builds on dev branches
  • Fix any broken build as a priority
  • No checking into a broken build
  • Tests are written to a high quality - they need to be as maintainable as your source code
  • Coding culture where the code is written in a style so it is easily testable  (I'll cover this in a separate blog post)
  • CI needs to run fast.  No point having 10,000 tests, if they take 18 hours to run. Run tests in parallel if you have to.  If your project is so massive that it really does take 18 hours to run automated tests, you need to consider some decomposition. For example, a micro-service architecture where components are in smaller and more manageable parts that can be individually released and deployed in a lot less than 18 hours.
For more information on how to achieve great CI, see here and here.

By mastering automated testing, the release cycle will be greatly shortened.  The dev team should be aiming towards a continuous delivery model where, in theory, any check-in could be released if the CI says it is green.  Now, this all sounds simple, but it is not.  In practice you need skilled developers to write good code and good tests.  But the better you are at it, the easier it will be to release, and the easier it will be to be truly agile.

Note: One of the reasons why the micro-services architectural style has become popular is because it offers an opportunity to avoid big bang releases and instead only release what needs to be. That's true.  However, most projects are not big enough or complex enough to need this approach.  Don't just jump on a bandwagon, justify every major decision.

Roles 

The most popular agile methodology Scrum only defines 3 Roles:
  • Product Owner 
  • Scrum Master
  • Dev Team. 
That's it.  But wait a sec! No Tech Lead, no Architect, no QA, no Dev manager - surely you need some of these on a project.
Of course you do. But this is often forgotten.  Scrum is a mechanism to help manage a project; it is not a mechanism to drive quality engineering, quality architecture and minimise technical debt.  Using Scrum is only one aspect of your process.  While your Scrum Master might govern process, they don't have to govern architecture or engineering.  They may not have a clue about such matters.  If all the techies are just doing their own stories, trying to complete them before the next show and tell, and no-one is looking at the big picture, the project will quickly turn into an unmanageable ball of spaghetti.

This is a classical mistake at the beginning of a project. At the beginning there is no tech debt. There are also no features and no bugs, and of course all of this is because there is no code! But the key point is that there is no tech debt. Everyone starts firing away at stories and there is an illusion of rapid progress, but if no-one is looking at the overall big picture - the architecture, the tech debt, the application of patterns or the lack of them - then after a few happy sprints the software entropy will very quickly explode.  Meaning that all that super high productivity in the first few weeks of the project will quickly disappear.

To mitigate this, I think someone technical has to back away from the coding (especially the critical path) and focus on the architecture, the technical leadership and enforcing code quality. This will help ensure good design and architecture decisions are made and that the non-functional targets of the system are not just well defined but are met.  It is not always a glamorous job, but if it ain't done, complexity will creep in and soon make everything - from simple bug fixing to giving estimates to delivering new features - much harder than it should be.

In the Scrum world it is a fallacy to think every technical person must be doing a story and nothing else.  It is the equivalent of saying that everyone building a house has to be laying a brick, and then wondering why the house is never finished when the bricks never seem to line up.



Someone has to stand back and ensure that people's individual work is all coming together, that the tech debt is kept at acceptable levels and that any technical risks are quickly mitigated.

Until the next time, take care of yourselves.




Thursday, June 2, 2016

Agile Databases

Any project following an Agile methodology will usually find itself releasing to production at least 15 - 20 times per year. Even if only half of these releases involve database changes, that's 10 changes to production databases, so you need a good lean process that gives you a proper paper trail but at the same time doesn't slow you down unduly. So, some tips in this regard:

Tip 1: Introduce a DB Log table

Use a DB Log table to capture every script run, who ran it, when it was run, what ticket it was associated with etc. Here is an example DDL for such a table for Postgres:
create sequence db_log_id_seq;

create table db_log (
    id int8 not null default nextval('db_log_id_seq') primary key,
    created timestamp not null,
    db_owner varchar(255),
    db_user varchar(255),
    project_version varchar(255),
    script_link varchar(255),
    jira varchar(255)
);
W.R.T. the table columns:
  • id - primary key for table. 
  • created - the time the script was run. This is useful.  Believe me. 
  • db_owner - the user who executed the script. 
  • db_user - the user who wrote the script 
  • project_version - the version of your application / project the script was generated in.
  • script_link - a URL link to a source controlled version of the script 
  • jira - a URL to the ticket associated with the script. 

Tip 2: All Scripts should be Transactional

For every script, make sure it happens within a transaction, and within the transaction make sure there is an appropriate entry in the db_log table. For example, here is a script which removes a column:
BEGIN;
ALTER TABLE security.platform_session DROP COLUMN IF EXISTS ttl;
INSERT INTO db_log (
       db_owner, db_user, project_version, script_link, jira, created)
VALUES (
       current_user,
       'alexstaveley',
       '1.1.4',
       'http://ldntools/labs/cp/blob/master/platform/scripts/db/updates/1.1.4/CP-643.sql',
       'CP-643',
       current_timestamp
);
COMMIT;

Tip 3: Scripts should be Idempotent

Try to make the scripts idempotent. If you have 10 developers on a team, every now and again someone will run a script twice by accident. Your db_log will tell you this, but try to ensure that when accidents happen there is no serious damage. This means you get a simple fail safe, rather than some newbie freaking out.  In the above script, if it is run twice the result will be exactly the same.

Tip 4: Source Control your Schema

Source control a master DDL for the entire project, updated any time the schema changes. This means you have update scripts and a complete master script containing the DDL for the entire project. The master script is run at the beginning of every CI run (a sketch of wiring this in follows after the list below), meaning that:
  • Your CI always starts with a clean database 
  • If a developer forgets to upgrade the master script, the CI will fail and your team will quickly know the master script needs to be updated.
  • When you have a master script it gives you two clear advantages: 
    • New developers get up and running with a clean database very quickly
    • It becomes very easy to provision new environments. Just run the master script! 
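As a rough illustration of running the master script at the start of CI, here is a hedged JUnit sketch; the script location, the JDBC settings and the naive statement splitting are assumptions, not part of the original setup.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.junit.BeforeClass;

// Hypothetical base class for integration tests: rebuilds the schema from the
// source controlled master DDL so every CI run starts with a clean database.
public abstract class CleanDatabaseTest {

    @BeforeClass
    public static void rebuildSchemaFromMasterScript() throws Exception {
        String ddl = new String(
                Files.readAllBytes(Paths.get("scripts/db/master.sql")), StandardCharsets.UTF_8);
        try (Connection con = DriverManager.getConnection(
                     System.getProperty("ci.db.url", "jdbc:postgresql://localhost/ci"),
                     System.getProperty("ci.db.user", "ci"),
                     System.getProperty("ci.db.password", "ci"));
             Statement stmt = con.createStatement()) {
            // Naive split on ';' is enough for plain DDL; a proper script runner
            // (or a migration tool) would handle functions and quoted semicolons.
            for (String sql : ddl.split(";")) {
                if (!sql.trim().isEmpty()) {
                    stmt.execute(sql);
                }
            }
        }
    }
}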

Tip 5: Be Dev Friendly

Make it easy for developers to generate the master script. Otherwise when the heat is on, it won't get done.

Tip 6: Upgrade and Revert

For every upgrade script write a corresponding revert script. If something unexpected happens in production, you've got to be able to reverse the truck back out!  For example, here is a revert script which restores dropped columns:
BEGIN;

ALTER TABLE security.platform_session ADD COLUMN hard_ttl INT4;
UPDATE security.platform_session  SET hard_ttl = -1 WHERE hard_ttl IS NULL;
ALTER TABLE security.platform_session ALTER COLUMN hard_ttl SET NOT NULL;

ALTER TABLE security.platform_session ADD COLUMN ttl INT4;
UPDATE security.platform_session  SET ttl = -1 WHERE ttl IS NULL;
ALTER TABLE security.platform_session ALTER COLUMN ttl SET NOT NULL;


INSERT INTO db_log (
       db_owner, db_user, project_version, script_link, jira, created)
       values (
       current_user,
       'alexstaveley',
       '1.1.4',
       'http://ldntools/labs/cp/blob/master/platform/scripts/db/reverts/1.1.4/revert-CP-463.sql',
       'CP-463',
       current_timestamp
    );

COMMIT;

Until the next time take care of yourselves.

Thursday, May 19, 2016

Immutable pointers - a pattern for modular design to help achieve microservices

Modular design has a lot of benefits, including:
  • making it easier to predict the impacts from a change
  • helping developers work in parallel
But it is much, much harder than people think.  For a start, the abstractions have to be at the right level.  Too high and they can become meaningless; too low and they cease to be abstractions, as they will end up having way too many dependencies (efferent and afferent).

In a recent green field project, which was in the area of digital commerce, I was intent on achieving good modular design in the back end for the reasons outlined above and because the project had potential to grow into a platform (more than likely microservices) that would be used by multiple teams. To achieve the modular design, after some white boarding the back end was split up into a bunch of conceptual components to achieve shopping functionality.  The core components are listed below.


  • Shopping component - Core shopping functionality e.g. Shopping Cart management
  • User component - user management, names, addresses etc
  • Purchases - details about previous transactions
  • Merchant - Gateway to merchant API to get information about merchant products etc. 
  • Favourites - Similar to an Amazon wishlist
Now, every software project uses the word "component" differently. In this project, a component was strictly defined as something that did something useful and contained:
  • domain classes 
  • services
  • a database schema (could be its own database, but the idea was to isolate it from any other component's persistence) 
  • its own configuration
  • its own dedicated tests
  • its own exception codes domain 
The outside world could access a component via a bunch of ReSTful endpoints. 

Any component could be individually packaged, deployed etc.  Now, the astute out there will be thinking, "this sounds like microservices" - well, they almost were.  For this project, some of them were co-located, but they were architected so that splitting them out into individually deployed artefacts (and hence a microservices approach) would be easy.

Ok, to reiterate, the goal was to achieve a very clean modular design.  This meant that I didn't want any dependencies from one component's database schema to another, and for this blog post we are only going to focus on how that aspect of the modularity was achieved.

Now, looking at the above components, it doesn't take long to see that this isn't going to be so easy.  For example: 
  • A shopping cart (in the Shopping component) will have a reference to a User (in the User component)
  • A shopping cart item (Shopping component) will have a reference to a Product (Merchant component)
  • A Shopping Cart (Shopping component) will have a reference to a shipping Address (User component)

So the challenge of achieving modularity in the persistence tier should now be becoming clearer. References of some sort across components need to be persisted.  Immediately, any developer will ask, "Wait a sec, if we just use foreign keys for these inter-schema references we get ACID and referential integrity for free!"  True. But then you are losing modularity and introducing coupling.  Say you want to move your products (in the Merchant component) away from a relational database and use something like Elasticsearch or MongoDB instead - to leverage their searching capabilities.  That foreign key isn't so useful now, is it?

Ok, so first of all, in looking for a solution here, I thought about all the references that crossed components to see if there was anything they had in common.  One thing that was obvious was that they were generally all immutable in nature.  For example:
  • When a Cart Item (Shopping component) points to a Product (Merchant component) it points to that product only.  It never changes to point to another product.  
  • When a Shopping Cart (Shopping component) points to a User (User component), it is also immutable.  My shopping cart is always mine, it never changes to be someone else's.
So I was now starting to think about preferences:
  1. Avoid cross component dependencies if you can (this should be kinda obvious)
  2. If you have to have them, strive for immutable references.
So, next up was to have a name for this type of relationship - which I was calling "Immutable pointer" in design and architecture documents. But for actual code I needed something more succinct. The database schema was already using "id" for primary keys and "{name_other_relationship}_id" for foreign keys. So I decided all cross component relationships would be named as the name of the entity being pointed to plus "Ref".  

So, some concrete examples:
  • userRef   (ShoppingCart pointing to the user)
  • productRef (CartItem pointing to the product)
  • shippingAddressRef (ShoppingCart pointing to the ShippingAddress)
This meant anytime anyone saw something like "xyzRef" in code, schema or logfiles they knew it was a cross component reference. In case it wasn't obvious, Ref was an abbreviation for Reference.

Next up was to decide on the format for the actual Refs.  I took a bit of inspiration from that thing called the internet, which of course has a similar concept: abstractions (web sites made up of web pages) contain immutable pointers - hyperlinks - to other abstractions: web pages in other web sites.


So, similar to hyperlinks and URLs, the refs would follow a hierarchical name spacing format.  Some good input from a senior technical member of the team suggested continuing the inspiration from the Web and starting the hierarchical names with cp:// - CP for Commerce Platform, the name of the project - analogous to http://.  I thought this was a good idea as it indicates that our platform generated the reference.  Again, this meant the refs stood out in logfiles etc. and could be differentiated from any downstream components also using hierarchical type references but in a different context. 

The key point of the ref was that, when generated, it should of course be unique.  To achieve this, a mixture of database primary keys and things that were naturally unique about the data (e.g. product SKUs) was used.  A small helper sketch follows after the examples below.

So some examples: 
  • userRef -> cp://user/{uuid of user}
  • productRef -> cp://merchant/{uuid of merchant}/product/{sku of product}
  • cardRef -> cp://user/{uuid of user}/card/{uuid of card}
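To show how such refs might be built and kept opaque in code, here is an illustrative Java sketch; the class name and factory methods are assumptions, not the project's actual code.

import java.util.UUID;

// Illustrative helper only.  A ref is built once, never changes, and is
// persisted as an opaque string column (e.g. user_ref) with no foreign key
// into the other component's schema.
public final class Ref {

    private final String value;

    private Ref(String value) {
        this.value = value;
    }

    public static Ref user(UUID userId) {
        return new Ref("cp://user/" + userId);
    }

    public static Ref product(UUID merchantId, String productSku) {
        return new Ref("cp://merchant/" + merchantId + "/product/" + productSku);
    }

    public static Ref card(UUID userId, UUID cardId) {
        return new Ref("cp://user/" + userId + "/card/" + cardId);
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof Ref && value.equals(((Ref) other).value);
    }

    @Override
    public int hashCode() {
        return value.hashCode();
    }

    @Override
    public String toString() {
        return value;
    }
}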

Always immutable?

As stated, the first preference was to avoid the cross component reference altogether. The second preference was to use the immutable pointer (ref) pattern. However, what about an edge case where the cross component reference could be mutable?  Could this happen?  Well, it could.  It is easiest to explain with an example.

Every shopping cart doesn't just have a User, it also has a selected shipping address where the purchased contents will be shipped to. In the domain model, the user's address lived in the User component.  But unlike the other cross component references, the shipping address could change. Consider your Amazon shopping cart.  Imagine you are on the checkout screen with your selected card and your selected address, but before you proceed to checkout you go into your user preferences and delete your card and address.  This project had to facilitate similar scenarios.  So if the User deletes the address that a shopping cart is pointing to, what should happen?

For inspiration for a solution here, we can look to the various NoSQL patterns, and in particular one of the most popular: eventual consistency. What this says is that, unlike ACID, you don't always need consistency straight away, all the time.  In certain cases it is okay to allow inconsistency on the basis that the system is able to reconcile itself.

So in this case:
  1. The shopping cart is pointing to a specific shopping address using an addressRef. 
  2. The user deletes that address by hitting a ReST endpoint in the user component.
  3. This means the shopping cart will point to an address that doesn't exist. The system is inconsistent.
  4. The next time the user reads the shopping cart, as part of handling the request the Shopping component asks the User component whether the address with this address ref still exists and, if it doesn't, removes the pointer (a sketch of this follows below).
  5. The system is now consistent.
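A hedged sketch of what step 4 might look like in the Shopping component; the repository, client interface, entity and method names are illustrative assumptions rather than the project's real API.

// Illustrative sketch only.
interface CartRepository {
    ShoppingCart findByUserRef(String userRef);
    void save(ShoppingCart cart);
}

interface UserComponentClient {
    // e.g. a GET against the User component's ReST endpoint for the address ref
    boolean addressExists(String addressRef);
}

class ShoppingCart {
    private String shippingAddressRef;
    String getShippingAddressRef() { return shippingAddressRef; }
    void setShippingAddressRef(String ref) { this.shippingAddressRef = ref; }
}

class ShoppingCartService {

    private final CartRepository carts;
    private final UserComponentClient users;

    ShoppingCartService(CartRepository carts, UserComponentClient users) {
        this.carts = carts;
        this.users = users;
    }

    ShoppingCart getCart(String userRef) {
        ShoppingCart cart = carts.findByUserRef(userRef);
        String addressRef = cart.getShippingAddressRef();
        // Reconcile: the address may have been deleted via the User component
        // since the cart started pointing at it.
        if (addressRef != null && !users.addressExists(addressRef)) {
            cart.setShippingAddressRef(null);
            carts.save(cart);
        }
        return cart;
    }
}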
So with the architect hat on it is really important we get this all right. Otherwise the goal 
of modular design falls apart. 

In this case, it is worth reiterating the strategy one more time:
  1. Avoid cross component references.  Doesn't matter how great your pattern is, if you have a lot of cross component references,  it is more than likely you have got the component abstractions at too low a level.
  2. Favour immutability.  In general immutability means fewer code paths, fewer edge cases and less complexity in code. 
  3. Eventual consistency.  
Software architecture is about trade offs and finding the right balance.  In this case, it was the balance of having a clean modular design without going overboard with it to the point where it becomes impossible to achieve. 

For anyone trying to do microservices, I would strongly recommend trying to master how you would do modular design in your architecture first.  If you can't do this, when you add in the complexity of the network things get very complicated. 

  

Monday, May 9, 2016

Requirements in an Agile World


On a recent project, stories with unclear requirements were ending up with developers. It was a pain point. Time was spent trying to clarify missing pieces, meaning less time to write great code and developers ending up under even more pressure to get things done.

Now, there are many reasons why development teams may struggle to adopt a well structured agile process (whether it be Scrum, Kanban, XP or whatever).  I think one reason is when requirements going into sprints aren't detailed enough. They lack sufficient specification.  This, of course, then creates further problems, including: bad estimations, a drop in engineering quality and the correct functionality just not getting delivered. If this sounds in any way at all familiar, please read on.

In our case, we did have a development process which is best summarised by the Sprint board swim-lanes we were using:
  • Ready for development - Story was in sprint but no-one had started it. 
  • In Progress - developer working on story 
  • Code Review - 2 people usually code reviewing. We were good at this. 
  • UX review - UX or Product person would look at output to confirm everyone on same page.
  • Done - Story merged, finished testing and ready to be released. 
The unclear requirements problem was happening because the process didn't vet stories sufficiently before they were passed over to Developers. I decided to attempt to fix this by introducing a new process for the team specifically for capturing requirements.

Some goals I had for this process:
  • It should be JIRA centric. Developers, analysts, product owners, leads, should be able to get everything they needed from the one tool which in our case was JIRA.  Time to cut down on Excel, Word, random emails, meeting notes on someone's laptop as much as possible and put all the important information in one shared place.  It doesn't necessarily have to be JIRA, could be Asana, Trello or any Agile friendly project management tool that your team are using.
  • The stories had to be vetted. Strict criteria had to be met before stories could be handed over to Developers. The vetting needed to be done by key technical people. It was fine for someone with a non-technical background to bring forward an idea - they are usually the people to do this as they are closer to the customers. But whatever the idea, it had to be technically feasible. And if it needed to be tweaked to make it so, it was better that happened as early as possible. 
  • Key stakeholders had to come in at key stages in requirements capturing. For example, UX expertise for UX design. 
  • Allow people to work in an asynchronous manner. The reality was that story evolution required different key people at different stages. The process needed to facilitate this. 

In the beginning...

So, not being a JIRA expert, I first created a JIRA project that I could play around with without impacting the existing project. I then set about attempting to create a new workflow for the requirements capturing process.

JIRA statuses

Ok, so every workflow involves moving an issue through various statuses.  I had a look at the existing JIRA issue statuses (the ones that come with JIRA out of the box), both for inspiration and because I wanted to use existing industry norms as much as possible. After a quick look, I came up with some statuses I thought made sense:
  • Idea - this would represent just a brain dump of someone's idea. Could be just a sentence, a paragraph or heck even just a picture! 
  • Story Definition - at this state, the expectation would be that the acceptance criteria and the usual story stuff would be there. 
  • UX Design - screen shots, wire frames etc would be added by someone with UX expertise which would guide the Developer on how things should look. 
  • Technical Review - this step would be when the serious vetting was done. If there was not enough information Technical Review would fail and the issue would be sent back to Story Definition state for further work.  Two other important functions of this step:
    1. It provided an opportunity to provide architectural advice if it was needed 
    2. It prompted for an initial estimate for the work. Idea was to really capture if this was a small, medium or large piece of work.  The estimate could always be refined.
  • Ready for development - this step would indicate that the technical review had passed and the story was ready to go into a sprint and be taken by a Developer.

The observant amongst you will note that the last step in the requirements capturing process is the first in the developer process. This was deliberate, to emphasise the separation of flows and give a sense of continuity when the issue ended up in a sprint with a Developer.

Workflow 

So next up was to put all those states into some sort of workflow. This is best explained by the JIRA workflow diagram below. 

Requirements workflow
With respect to the above diagram, the following are the key points: 
  1. The general direction is: Idea -> Story Definition -> UX design -> Technical Review -> Ready for Development. 
  2. It is possible for the issue to go forwards and backwards between several states. For example, it is possible to be in Technical Review, then go back to Story Definition, back to Technical Review and then back to Story Definition again. The idea here is not only to have some vetting before a story goes to Ready for Development, but also so that an analyst can gather some technical questions regarding feasibility and get them answered, to help them evolve their story definition.  There might be an element of to and fro between key stakeholders, and the workflow captured this.  It is worth re-emphasising that the Technical Review and Story Definition stages are usually handled by two different people: a technical expert for the former and a product expert for the latter.  The process must help them work together. 
  3. I added an On Hold and a Closed stage. On Hold was for stories that just became less important but not irrelevant. The irrelevant were just Closed. The advantage of the Closed stage as opposed to just deleting them was that you had a record of the work that was done and the reason why you didn't proceed with the original idea. 

New fields

Next up was to add more fields. Some of these came from feedback from the team. This was good: the more people who contributed to the process, the more buy in, and the more likely the new process was going to work. 
  • Business Driver - the idea here was to capture why we were doing this JIRA issue. Was it an internal idea, did it come from a customer, or what? It was important that everyone knew this: Analyst, Developer, Lead etc. 
  • Acceptance criteria - a logical place to formally specify the acceptance criteria. 
  • Technical review - a logical place to detail the technical review. This would include architectural advice on how the Developer should approach the story. 

Checkpoints

As stated, the plan for this process was to allow stories to evolve from a simple idea into something that was formally agreed and could just be implemented by a Developer. To facilitate this evolution, I made key fields compulsory at key stages. 
  • Acceptance Criteria had to be filled out before something could be put into Technical Review 
  • Story points had to be filled out before the story could leave Technical Review. 
To do this in JIRA: 
    1. Select to edit the workflow
    2. Select the relevant transition and then the view validators option.
    3. Select validators tab
    4. Select to add a new validator
    5. Select "required fields".  Select the fields you want completed before the transition can happen

Could the right stakeholder please stand up?

As stated, a goal of this process was to bring in the right stakeholder at the right time.  More specifically:
  • when the JIRA made it to UX Design it would be assigned to the team designer
  • when the JIRA made it to Technical Review  - it was assigned to the technical lead. I played this role and then would either do the technical review or assign it to another senior technical person on the team. 
  • If the issue didn't pass Technical Review and instead went back to Story Definition, it was reassigned to the issue reporter. 
To enable the auto assign in JIRA, I added a "post function" to specific steps to auto assign the JIRA issue to correct stakeholder.  This feature is available off any transition in the workflow.  Just pick the transition where you want to auto assign. 

Stories, New Features or Improvements?

JIRA, out of the box, gives the option to create many types of issues amongst them: Stories, New Features, Improvements. The problem is that there is an element of subjectivity here.  Implementing something like "Allow the user reset their password" could be a Story to one person, a New Feature to someone else and even an Improvement to the person sitting next to them.  I decided for this process, that something starts off as a "New Feature" or  an "Idea" only. Then, it must go through the new requirements process and when done will be moved to either a Story or Epic. 

This made it easier to distinguish whether something was in development or still in a requirements capturing phase.  When someone searching issues in JIRA sees a Story or an Epic, they immediately know that it has gone through the requirements gathering process and has been vetted, and they can be confident it is really ready for development.  This is really useful for backlog grooming, which I think always works better when it is just a scheduling exercise rather than a debate about requirements that is open to the floor. A backlog grooming session that turns into a deep discussion about what should happen can just become a very long talk shop for the entire dev team.

Boards 

As stated, we already had a Developer sprint board. This board worked well. We used it on the big screen at daily stand ups. I wanted something similar for the requirements capturing process. I decided against changing the existing dev board, for the following reasons: 
  • It would have way too many swimlanes (min 10) and would be difficult to even see on our massive widescreen. 
  • The dev process and requirements process are really for two different sets of people. 
  • The dev board was a scrum board. While our dev team used a mixture of Scrum and Kanban, it could use a scrum board, but there was no way the requirements team could.  At the end of the sprint, everything was in the right-most swimlane and the release was ready to go.  It was nice to have that board focussed purely on development, with the end goal of releasing, rather than cluttering it with requirements that could evolve at a different pace.  
So with that in mind, I created a new board specifically for the requirements gathering process.  With the following swimlanes:
Requirements swim lanes.
To avoid creating sprints for this board (they would be pointless), I made this a Kanban board. 

Filters

I added filters for key people on the Product side of the team. They were the individuals who would be driving most of the requirements.  This was to help them track their own work and show what they were doing to the rest of the team.

Priorities

I toyed with the idea of another field to give the JIRA issue a priority. However, why re-invent the wheel? JIRA already had a priority field with 5 different levels (in order):
  • Blocker
  • Critical
  • Major
  • Minor 
  • Trivial
So I thought, let's just use this, and if we need a 6th we can add it.  Five priority levels should really be enough though. To encourage people to use them, I colour coded the levels and made their colours display on the JIRA board.

This meant any time there was a discussion about all the stories, the priorities were obvious. Things could be reprioritised easily.  The colour coding can be configured on the Agile board.   I got some great feedback on what the various colours should be from the team!  Naturally, I updated the existing Developer board to use the same scheme.

Meetings

Every process needs meetings.  The requirements capturing process has three types of meetings:
  • Key Technical Leads and Product Analysts met once a week.  We would run through the entire requirements board, spending two minutes on each issue.  I used this opportunity to make sure nothing was blocked, that we were in agreement on priority and that everyone was buying into the process.  It is important to emphasise this was not a backlog grooming session.  All requirements had to go through this process first, before they were ever discussed in backlog grooming.   
  • Product Analysts met amongst themselves.  This meeting didn't involve me but it gave the Product side a chance to delve more into stories,  exchange ideas and do important work without having to take away time from Technical people. 
  • Product Analyst and Developer for a Story.  Usually, each issue ended up with one Product Analyst and one technical stakeholder who would end up developing the feature.  They could meet 1-1 whenever they saw fit, to thrash out requirements, tease out edge cases, you name it.  The idea here was to give the developer a sense of ownership of the feature.  They were involved from an early stage and would be responsible for delivery of the new feature.  How they met, and how often they met, was up to them. 

My Role

For this project my role was a mixture of Tech Architect and Tech Lead, and I also coded.  To do the Tech Architect and Tech Lead work, I generally avoided stories on the critical path and didn't code as much as some of the other developers on the team.  I was initially going to do all the Technical Reviews.  I decided against this as it just doesn't scale and it would slow things down.  I also wanted to get developers involved early and give them a sense of ownership.  The team had strong developers who wanted full stack development.  The best way to achieve that is to give them a feature to own and then just provide the architectural advice when needed.  The architectural advice would be along the lines of:

  • Discussion and agreement of changes to key interfaces, ReST endpoints or Database schema
  • Patterns
  • Best practises
  • Technical risk assessment and mitigation

When the issues made it to Technical Review, they were auto assigned to me in JIRA, and then I would delegate them out as much as possible, just zooming in and out as appropriate.

Ah more advantages

The other advantage of this process was that I could see what might be coming up in the next 6 months.  Product Analysts would continuously add ideas for things that might or might not evolve into an actual requirement, depending on customer needs etc.  This meant I could be confident that the existing architecture was safe with respect to what might be coming over the next 6 months.  For example, if I saw a lot of ideas coming in around different types of searching, I would keep something like Elasticsearch on the tech radar - as in, this might become more relevant very quickly at some stage.  It also helped strike a good balance in technical decisions and avoid over engineering for scenarios that were extremely unlikely.  

Final words

Anyway, that's about it.   I believe no matter what you are doing, to get the best results you need a good structure.  In software, that means you need a good process. Everyone needs to know what they need to be doing, what the priority is and the structure should make it as easy as possible for the team (which is always going to be a mixture of skills and backgrounds) to work together to deliver a top quality product. 



Sunday, February 7, 2016

Book Review: Building Microservices

The architectural idea of microservices was inspired - in part - by Unix's philosophy of code being short, simple, clear, modular and extendable, so that it could be repurposed by developers. The term is currently up there with the Internet of Things, Big Data and the Cloud in the contemporary technical lexicon.

Author Sam Newman is an industry expert on the subject. He has written for InfoQ and presented at JavaZone and various other events on microservices.

His book 'Building Microservices' does an excellent job introducing the key concepts of microservices:
  • services are autonomous and live on their own machines
  • resilience - one service fails, it doesn't impact other services
  • scaling - because services are independent they can be independently scaled
  • deployment - deployment should be easy. A change to a service means that's all that's deployed.
It is worth pointing out it isn't just a book about microservices. Many of the ideas and best practices detailed (for example, HATEOAS in your ReST approach and checking OWASP for security references) are equally applicable to non-microservices architectures. But, that said, what I really like about this book is that any time a general architectural or software engineering concept is explained (and there are a lot of them), it is explained very well.
There are - of course - also many tips regarding how you approach microservices. For example:
  • Don't get too hung up on DRY. It may make it harder to keep your services independent.
  • Consider using CDC's for your testing approach
  • Canary releasing / Blue, Green testing for releasing.
  • Using bulkhead and circuit breaker patterns to make services more resilient.
As an overview to microservices, it's a great book. However, the reader must bear in mind there is no one size fits all approach to microservices. The finer details will depend on your project, your team and even a bit of trial and error. It's difficult to critique this book. So instead, I'll just flag some concerns not about the actual book but about microservices in general and how the industry and developers are reacting to them.

My first concern with microservices is more a practical one than a technical one. Doing modular design for everything from your schema to your service layer, end points and configuration isn't as easy as some people might think - especially in teams of varied skill levels and backgrounds, under the inevitable commercial pressure that happens in every project. It requires a lot of technical aptitude, leadership and discipline. If a project is not good at achieving modular design in a monolith - for whatever reason - I think it will really struggle at modular design in microservices, when the complexity of the network also has to be considered.

The second concern I'd have is that when the principal ideas of microservices are explained, they are often compared with a monolith, to the point that the word monolith becomes a pejorative term.  The two approaches, monolith and microservices, are presented as if it's either one or the other.  I think this is a false dichotomy. There are other approaches available.

Thirdly, microservices are, no doubt, a clever idea.  But that doesn't mean they are a panacea. In some projects they will be a good fit; in others they will not be worth it. One obvious factor to bear in mind is your non-functional requirements.  One useful point of reference worth considering is the project James Lewis (another Thoughtworks guru) described in his talk about microservices in 2012.  In this presentation, three non-functional requirements for the project caught my attention:
  • One component had to handle 1,000 TPS
  • Another had to support a user base of 100 million users
  • Another had to support batch loads of 30 - 90 million records
While I am not saying you should be in this ballpark before considering microservices, I am just trying to suggest that most projects don't have these sorts of demands, and that there's a lot of merit in considering something like a very well structured modular monolith first, using DDD principles, and then migrating towards microservices should the benefits justify it. This strategy is well explained by Martin Fowler in his Monolith First article.



Until the next time enjoy yourselves.


Fail Safe / Fail Fast

When developing a rapid prototype, it can make sense to put the emphasis on the 'happy path' and not consider things like exception handling, edge cases and failure.  Perhaps once the prototype phase of the project is over, the code will be thrown out or it will be refactored to deal with the real world.

In a production system we simply don't have the luxury of only considering the happy path.  The more production systems I have worked on, the more types of failure I have been exposed to - some of it very painful. As my hair went greyer from these experiences, I couldn't help thinking more and more about how to make failure less painful. Who wants their pager going off for some silly reason? Nobody. If we are dealing with production code, we simply must think about how best to deal with various forms of failure.

In this presentation I consider two engineering techniques that I think should always be on the architectural radar (a small illustration of the difference follows below):
  • Fail fast
  • Fail safe
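As a hedged illustration of the difference (the class, method and collaborator names below are made up for the example): fail fast rejects bad input immediately at the boundary, while fail safe degrades gracefully when a non-critical dependency blows up.

import java.util.Collections;
import java.util.List;

public class FailureStyles {

    // Fail fast: reject bad input immediately, with a clear error, rather than
    // letting a bogus value wander deeper into the system.
    public static int parsePort(String value) {
        int port = Integer.parseInt(value);   // blows up straight away on junk input
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        return port;
    }

    // Fail safe: tolerate the failure of a non-critical dependency and degrade
    // gracefully instead of failing the whole request.
    public interface RecommendationClient {
        List<String> recommendationsFor(String userId);
    }

    public static List<String> recommendations(RecommendationClient client, String userId) {
        try {
            return client.recommendationsFor(userId);
        } catch (RuntimeException e) {
            // In real code: log the failure; here we just fall back to an empty list.
            return Collections.emptyList();
        }
    }
}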


Sunday, January 24, 2016

Developer Productivity

Recently I gave a presentation on developer productivity - something that has always interested me.  Generally, when a system gets out of control with spiralling technical debt and software entropy, it becomes impossible to make changes or add new functionality to that system in a reasonable and predictable way.  At a critical level, this can be the difference between project success and project failure.  If processes are smart and good structures are in place, developers are productive, not burning out from late hours debugging crazy code, terrified of the impact of any change.

Everyone wants a path to project delivery that is smoother.  No matter what the job or the task, it is generally far easier to get a sense of satisfaction when there is a sense of productivity.  But let's face it: software entropy exists in nearly every project.  Why?  Are we not smart enough to use the machines we designed?  Do commercial realities inevitably mean bad code happens more than good code?

These are questions that spark lively discussions.  In this presentation, I outline some of my own ideas on developer productivity.