– High value position
– Great salary
– Visible role
– Lots of interactions
– Safe from outsourcing
– Difficult to stay up to date
– Difficult to get right
– Can receive bad requirements
– First in line to receive blame
– Responsible for Technology
– Converts Functional Requirements to Technical Architecture
– Carefully balances Patterns/Requirements/Elegance/Concepts
– Researches Key Technologies
– Has deep understanding of Design Patterns
– Motivates and guides development team
– Ensures that the Lead Developer is successful
I’ve been using the term Integration Spaghetti™ for the past 9 years or so to describe what happens as systems connectivity increases and increases to the point of … unmanageability, indeterminate impact, or just generally a big mess. A standard line of mine is “moving from spaghetti code to spaghetti connections is not an improvement”.
(A standard “point to point connection mess” slide, by enterprise architect Jerry Foster from 2001.)
In the past few days I’ve been meeting with a series of IT managers at a large customer and have come up with a revised definition for Integration Spaghetti™ :
Integration Spaghetti™ is when the connectivity to/from an application is so complex that everyone is afraid of touching it. An application with such spaghetti becomes nearly impossible to replace. Estimates of change impact to the application are frequently wrong by orders of magnitude. Interruptions in the integration’s functioning are always a major disaster – both in the time and people required to resolve them and in their business impact.
Even as the spaghetti-bound application becomes nearly impossible to replace, the situation continues to grow worse: additional connections are made to these key applications, derivative copies of the data are taken from them, or clones are created to avoid them (thereby creating yet another synchronization and connection point).
Such spaghetti takes multiple forms but often involves ALL of them at once, across multiple generations of connection technology: excessive point-to-point connections, tightly coupled connection technologies, database triggers, business logic embedded in EAI process steps, many batch jobs in and out to and from many destinations, ETL loads and extracts to/from other databases, multiple services providing nearly (but not exactly) identical data sets, and the involvement of many message queues.
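The arithmetic behind the point-to-point mess is worth spelling out: with n systems each connecting directly to the others, the number of possible links grows quadratically, while a hub- or bus-style approach grows only linearly. A quick back-of-the-envelope sketch (the function names are mine, for illustration):

```python
def point_to_point_links(n: int) -> int:
    """Maximum direct links among n systems: one per pair, n*(n-1)/2."""
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    """Links when every system connects only through a central hub or bus."""
    return n

# 5 systems: 10 possible direct links vs 5 hub links.
# 50 systems: 1225 possible direct links vs 50 hub links.
for n in (5, 20, 50):
    print(n, point_to_point_links(n), hub_links(n))
```

Even a fraction of those quadratic links is enough to produce the slide above.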
Anything is done to avoid dealing with the giant plate of spaghetti.
Systems will integrate with systems that integrate with it – piggybacking existing connectivity and putting a burden on the subsidiary system – to avoid directly connecting into the spaghetti. They’ll go to a secondary or tertiary data source to avoid going direct. Everyone knows to avoid the spaghetti if at all possible, and will spend double to triple the integration effort to do so.
If the primary system is replaced, it’s not unusual that the new system won’t be wired into all the old connections – that would require actually understanding each existing connection, extracting it, and redirecting/reconnecting it to the new system. Rather, the OLD SYSTEM will stay around to act as the connection point for all the existing spaghetti connections, and the new system becomes just another integration, taking data feeds or a regular ETL load off the old system! Meaning the old system lives forever!
Does this problem ever get resolved? Yes. When the other side of the connections gets replaced, the new systems on that side will be integrated with whatever replaced the core spaghetti-bound system. If the IT shop is lucky, after a generation or so the spaghetti-bound system can be shut down.
Unfortunately, in major Enterprise IT shops finding some spaghetti integrations is not unusual. IT management is loath to acknowledge such a problem to the business and will continue with work-arounds until it directly impacts business goals. Otherwise it remains just another hidden enterprise IT expense.
Published on 26 October 2013 by @mathiasverraes
I’m a big proponent of pre-merge code reviews. From my experience consulting for teams in problematic projects, I can say that (along with daily standup meetings) pre-merge code reviews are one of the most effective and yet fairly easy changes a team can introduce to radically improve the condition of the project. And even if you consider your project to be healthy, there’s always room for improvement.
(Note that these rules are starting points. Figure out what works in your team, adapt continuously.)
Having multiple sets of eyes review a pull request before it gets merged to master or an integration branch is a great way to catch defects early. At this point, they are usually still cheap to fix.
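The mechanics are simple: all work happens on a short-lived branch, and nothing lands on the mainline until the pull request has been reviewed. A minimal sketch using a throwaway local repository (branch, file, and commit names are hypothetical; in practice the review happens on the hosted pull request, not at the command line):

```shell
set -e
# Throwaway repo standing in for the real project.
git init -q -b main demo && cd demo
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1" > app.txt
git add app.txt && git commit -qm "initial commit"

# One branch per story; never commit directly to main.
git checkout -q -b feature/report-totals
echo "v2" > app.txt
git commit -qam "improve report totals"
# Here you would push the branch and open a pull request;
# teammates comment inline on the diff before anyone merges.

# Only after approval does the branch reach the mainline.
git checkout -q main
git merge -q --no-ff feature/report-totals -m "Merge reviewed pull request"
```

The `--no-ff` merge keeps the reviewed branch visible as a unit in the history, which makes it easy to see later what was reviewed together.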
There are, however, much more important benefits. Instead of individual developers, the team is responsible for the internal and external quality of the code. This is a great remedy against the blame culture that is still present in many organisations. Managers or team members can no longer point fingers at an individual for not delivering a feature within the expected quality range. You tend to become happier and more productive, knowing that the team has your back. You can afford to make a mistake; someone will find it quickly.
Another effect is something called ‘swarming’ in Kanban. Because you are encouraged to help out on other branches before starting your own, you start to help others finish work in progress. Stories are finished faster, and there’s a better flow throughout the system. Especially when stories are difficult, or when stories block other stories, it’s liberating to have people come and help you get them done.
And of course, there are all the benefits of a clear sense of code co-ownership. It’s invaluable to have a team where everybody knows what the code does, why it’s designed that way, and how everything fits together. It also reduces the Bus Factor: no single team member is a bottleneck. Best practices are shared, and the code is more consistent. Opportunities for reuse are spotted before lots of duplication happens.
In short, pre-merge code reviews grow the team’s maturity.
Reading code is hard, much harder than writing it. Here are some ideas that I have found to make things easier.
I’m assuming in this post that you use GitHub, but there are other solutions, such as ReviewBoard, BarKeep, Phabricator, and Gerrit. I did a little bit of research last year, and felt that GitHub was the best tool, but I have no hands-on experience with the others. YMMV. The important factor for me is that you can review stories or branches as a whole, and comment inline. (Update: @tvlooy suggested GitLab.)
As usual, I will keep adding to this post as I find more patterns. Got tips?
Update: Apparently Phil Haack blogged about code reviews almost at the exact same time. He has some great tips like keeping a checklist, focusing on the code and not the author, and stepping through the code. Worth reading!