SDLC stands for Software Development Life Cycle. Contrary to popular belief, SDLC is not a framework or even a prescribed process. Rather, it can be defined as a conceptual model that represents how software is made as a series of steps. Those steps cover everything from ideation to delivery, described as follows:
1. Planning;
2. Analysis/Requirements;
3. Design and prototyping;
4. Software development;
5. Testing;
6. Deployment;
7. Operations and maintenance.
Regardless of the development methodology or framework your team uses, it should cover, in more or less detail, all the steps present in the SDLC. For example, a waterfall approach follows each step as a specific phase in the project, with the end of each one serving as a phase gate/milestone in terms of the project's progress. An Agile methodology normally "condenses" all those steps into repetitive, cyclical, and iterative chunks, covering all of them in each iteration.
The concept of Agile in software development has been around for decades. The lack of malleability, the heavyweight processes, and the resistance to change that were frequent in the industry until the late 90s, when Waterfall-oriented projects dominated, were confronted with a new direction. That was when methodologies now called Agile (the name didn't even exist back then), like Scrum, XP, Crystal, Feature-Driven Development (FDD), and Dynamic Systems Development Method (DSDM), among others, began to appear.
Ideas covering different aspects - welcoming changing requirements; delivering software frequently; close synergy between business people and the development team; and regular reflection on how to improve - brought to light a new way of creating software. Beyond any buzz or trends that come and go, the real benefit Agile proposes is to address known issues commonly faced in the software development world from a different perspective.
Instead of following the overused path of just covering the four values present in the Agile Manifesto, our approach will be to talk about Agile's principles and best practices. They are often overlooked, although they reveal even more of the mindset that Agile should bring.
A "practice" can be defined as “the actual application or use of an idea, belief, or method, as opposed to theories relating to it”. This definition represents clearly what Agile practices are: a way to apply the theory behind the actual concept of what being Agile means.
Agile practices can even be used without following a given Agile methodology - though using TDD (Test-Driven Development) alone, for instance, won't make your delivery or process completely Agile per se. It is relevant to explain that most Agile practices are called that because they either emerged from an Agile methodology or were created by Agile practitioners.
Different Agile methodologies encourage different Agile practices, to make them more objective and productive.
Each of these practices will generally focus on one specific aspect, such as management, development, testing, etc.
Without further ado, we present below a list of Agile practices that can be applied in different steps of your SDLC (software development life cycle).
Bear in mind that some of the practices presented here can be applied once or several times, depending on their goals. Some methods cover one step of the SDLC in more depth than others, and it may well be that a given practice fully covers one aspect while only partially covering another stage. The rule of thumb has always been: less focus on being strict, and more focus on achieving the desired results :).
One of the first artifacts a project should have is its product vision. For the initial envisioning of the project, some brief definitions are needed: who the clients and the team are, a high-level scope (and counter scope!), blueprints of the technical approach, potential risks, and estimated time and cost. A nice-to-have topic to cover is the Vision statement - also known as the "elevator pitch" - which should look more or less like the template below.
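A widely used format for it is the template from Geoffrey Moore's Crossing the Chasm, where the bracketed placeholders are filled in for your product:

"For [target customer], who [statement of the need or opportunity], the [product name] is a [product category] that [key benefit, compelling reason to buy]. Unlike [primary competitive alternative], our product [statement of primary differentiation]."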
The Business Model Canvas can be used to shape the product to be built, taking a hands-on direction in defining business models. Used in conjunction with Lean Startup, it is a tool that can efficiently serve as a visual chart of ideas and perceptions about an existing or new business.
It helps formulate a full understanding of the business as working hypotheses and value propositions, covering nine blocks in a structured way: activities, partners, resources, value proposition, customers, customer channels, customer relationships, costs, and revenue. The Business Model Canvas can be a vital ally when planning a project.
The Product Backlog, maintained by the Product Owner, is a list of business and project goals containing everything forecasted to be developed by the development team. It is a living document: continuously updated, prioritized, and ordered by business value. It may also contain product improvements, bugs, technical questions, etc. Its main purpose is to hold everything needed to reach the project's Product Vision.
Paulo Caroli created Lean Inception as his adaptation and evolution of the Inception phase used at ThoughtWorks. The idea behind it is to combine Design Thinking and Lean Startup in a discovery workshop to define the product's MVP. Run over the interval of one week, the workshop typically aims to find the direction the team should take to build the ideal product. This approach can be seen as an extension of the "Product Vision" topic mentioned earlier. It also covers steps like the definition of personas, journeys, and features, plus technical, UX, and business reviews, throughout that one-week timeframe.
The Product Design Process is how we, at Imaginary Cloud, define how to create digital products - and yes, we have a book about it. This approach, which we use internally on the projects we work on, is also used externally by different industry players. It focuses on covering the necessary steps to create a remarkable solution, bringing not only customers and product owners to the center of the discussion, but also the users.
This process may take one to a few weeks, depending on the complexity of the product and how deep we need to dig to define the solution. It includes 12 steps, going all the way from research, ideation, and execution to technical assessment, to investigate and identify the product's trajectory in the best possible way.
We mentioned the Product Backlog earlier in this post as a way to plan and structure your product goals. It is also worth showing a method for working with it (assuming that User Stories are used to create and maintain your Product Backlog).
User Story Mapping is purely a technique that allows a visual breakdown - or "slicing" - of user stories in such a way that they can be tackled and addressed in a sequence that makes sense as a product, from the backbone down to the smaller details. This approach is valuable because it gives context to how the features are split across the whole project, rather than merely representing a grouped list. It is important to say that how thin or thick the slices are, while targeting an end-to-end narrative, is defined through direct interaction with customers/users.
Domain Driven Design - or DDD - is a concept used in software design to structure the software architecture models using an abstraction of the application’s business domain. It requires integration and collaboration from both technical and business sides (therefore using one of DDD’s main characteristics: ubiquitous language, targeting a common understanding of terms from all sides).
Since DDD focuses heavily on the definition of the domain layer, besides taking advantage of Object-Oriented Programming concepts, it became quite popular within the OOP community. However, its general idea can be applied regardless of the programming paradigm used, especially because it can serve as a foundation for practices like TDD, BDD, CI, refactoring, etc.
Techniques like Entities and Value Objects, subdivisions of the domain such as bounded contexts, and building blocks like Services, Repositories, Factories, and Events bring a strategic design to the application, allowing the domain's structure, life cycle, and behaviour to be combined in a concise and coherent way.
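To make these building blocks a bit more concrete, below is a minimal sketch in Python; all the names are illustrative, not taken from any specific framework or real domain model.

```python
from dataclasses import dataclass, field
from uuid import UUID, uuid4

# Value Object: immutable, compared by its attributes, with no identity of its own.
@dataclass(frozen=True)
class Money:
    amount: int          # stored in cents to avoid floating-point issues
    currency: str

# Entity: has an identity (id) that stays stable while its state changes.
@dataclass
class Order:
    id: UUID = field(default_factory=uuid4)
    lines: list = field(default_factory=list)

    def add_line(self, description: str, price: Money) -> None:
        self.lines.append((description, price))

# Repository: mediates between the domain model and the persistence layer.
class OrderRepository:
    def __init__(self):
        self._orders = {}    # in-memory store; a real one would hit a database

    def save(self, order: Order) -> None:
        self._orders[order.id] = order

    def find(self, order_id: UUID) -> Order:
        return self._orders[order_id]
```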
A Spike - a term commonly used in Agile, coming from XP - refers to a type of user story used to explore an approach and seek just enough understanding of it, thereby reducing its risk if taken. The Architectural Spike goes one step further, towards software design and architecture. It aims to define the backbone of the architecture and how it will all work together, but pragmatically: the solution is proposed with the still-limited information available about the problem domain. During this practice, the definitions made frequently involve software layers, subsystem boundaries, very likely some working code, and source control tooling, forming a minimal/optimal skeleton of the application. It contributes to defining, and being part of, the System Metaphor: a "simple shared story of how the system works".
It goes without saying that, as the project and application evolve, the architecture should also be adapted and refined, the Architectural Spike being only the initial task in that direction. This idea is discussed in the practice covered in the next section.
The eleventh principle of the Agile Manifesto mentioned earlier says that "the best architectures, requirements, and designs emerge from self-organizing teams." Speaking strictly about design, you may still be wondering what that means. Emergent Design is about building the solution evolutionarily, allowing its design and architecture to be defined throughout the development journey. To use some jargon: instead of doing BDUF (Big Design Up Front), JEDI (Just Enough Design Initially) is commonly used. Thinking this way, incrementally, gives developers room to focus on direct project needs, avoiding an early and suboptimal architecture - even though it is worth mentioning that the focus should be on addressing requirements, not purely on trying to predict the future.
Despite some criticism of this approach (as opposed to defining a strong and complete backbone upfront for such a vital part of the application), letting the architecture emerge takes massive advantage of what Agile can bring in an adaptive, learning environment.
The practice of Continuous Integration (CI) corresponds to having a mainline of code, in a single software project repository/branch, that receives the changes or additions made separately by developers. Each integration should trigger a few steps, like automated tests and syntax/style review tools, usually orchestrated by a CI tool in conjunction with a Version Control Management System. XP suggests that this process should happen several times a day, to guarantee that a running, integrated version of the code always exists.
CI is the first phase in a chain that also covers Continuous Delivery (having the codebase deployable to different environments at any given time) and Continuous Deployment (an application is released into production whenever it succeeds in all the steps of the automated deploy process).
The standard strategy of Continuous Integration suggested by Martin Fowler follows the practices below:
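1. Maintain a single source repository;
2. Automate the build;
3. Make your build self-testing;
4. Everyone commits to the mainline every day;
5. Every commit should build the mainline on an integration machine;
6. Fix broken builds immediately;
7. Keep the build fast;
8. Test in a clone of the production environment;
9. Make it easy for anyone to get the latest executable;
10. Everyone can see what's happening;
11. Automate deployment.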
The main benefits observed with CI range from detecting bugs more efficiently and avoiding the overhead of a manual integration process, to always having an up-to-date build available, making the process more transparent (therefore enhancing communication), and encouraging more robust test coverage. CI also creates room for the implementation of practices such as pull requests and code review.
Test-driven development, or TDD, is known as the practice of test-first programming, which follows, by creating automated unit tests, the repeatable flow of:
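1. Writing an automated test for a small piece of behaviour and watching it fail;
2. Writing the simplest code that makes the test pass;
3. Refactoring the code (and the tests) while keeping everything passing;
4. Repeating the cycle.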
The main goals of this approach are to make the code clearer, simpler, and less bug-prone, while thinking better about its structure and considering the system's internal interfaces and responsibilities.
There are specific tools that support unit testing and TDD, commonly known as xUnit tools (JUnit, NUnit, PyUnit, PHPUnit, etc.), but the practice is not limited to them.
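As a minimal sketch of that cycle, shown at its final (green) stage and using Python's built-in unittest module - the function and test names are purely illustrative:

```python
import unittest

# Step 2 of the cycle: the simplest implementation that makes the tests pass.
# In real TDD this function would not exist yet when the tests first run (red).
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Step 1 of the cycle: tests written first, describing the expected behaviour.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three_return_fizz(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_multiples_of_five_return_buzz(self):
        self.assertEqual(fizzbuzz(5), "Buzz")

    def test_multiples_of_both_return_fizzbuzz(self):
        self.assertEqual(fizzbuzz(15), "FizzBuzz")

if __name__ == "__main__":
    unittest.main()
```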
TDD is undoubtedly a paradigm shift for a developer who is not used to following those steps. A common misconception about TDD is that it requires too much effort and time, and hence is not worthwhile. The rule of thumb in such cases is to find a middle ground and understand that TDD should bring more thoroughly tested and, therefore, cleaner code to the application.
It is essential to mention that TDD cannot be the only part of the application's quality assurance strategy, and this will be covered below when we talk about QA. Also, the automated tests created when using a TDD approach should surely be part of the application's Continuous Integration strategy, being one of the required steps for that process to complete.
Probably the best definitions of Refactoring come from Martin Fowler:
“A disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior”
Or...
“A change made to the internal structure of software, to make it easier to understand and cheaper to modify without changing its observable behavior”.
The need for code refactoring frequently comes from a "code smell": an indication of weakness or potential problems in the code that calls for reorganization. A common use of Refactoring is to pay down technical debt, one of a project's biggest nightmares. Also, in TDD, the step of rewriting code that already passes the tests is commonly called Refactoring as well.
For some people, it may be challenging to see the benefit of concentrating efforts on code that is already written. However, refactoring can certainly raise the code's maintainability, cohesion, readability, performance, and reusability, among other things that justify the time spent. It is worth re-emphasizing that Refactoring is not about creating new features, as that goes beyond its purpose. The target should always be to keep the current behavior (existing and/or new tests should be in place to guarantee that).
Some common examples of Refactoring are: applying design patterns, introducing polymorphism, encapsulating fields, and changing how parameters or exceptions are used, among others.
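As an illustrative sketch in Python (hypothetical code), extracting functions and a named constant to remove a code smell while keeping the observable behavior unchanged:

```python
# Before: price logic and a "magic number" tangled into one function (code smell).
def invoice_total_before(items, vip):
    total = 0
    for price, quantity in items:
        total += price * quantity
    if vip:
        total = total - total * 0.1   # unexplained magic number
    return total

# After: the same observable behavior, restructured for readability and reuse.
VIP_DISCOUNT = 0.1

def subtotal(items):
    return sum(price * quantity for price, quantity in items)

def apply_discount(amount, rate):
    return amount * (1 - rate)

def invoice_total(items, vip):
    total = subtotal(items)
    return apply_discount(total, VIP_DISCOUNT) if vip else total

# Behavior is preserved - exactly what tests should verify during refactoring.
assert invoice_total_before([(10, 2)], True) == invoice_total([(10, 2)], True)
```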
BDD stands for Behaviour-Driven Development and can be defined as "an approach to development that improves communication between business and technical teams to create software with business value". BDD aims to be the bridge between business people, developers, and QA testers (not to say all the people involved in the project), in order to guarantee a shared understanding and uniform communication of the characteristics of the application. This is achieved by creating its specifications based on scenarios and examples, using the common "Given-When-Then" pattern to represent the actual behaviours of the solution.
ATDD (Acceptance Test-Driven Development) goes one step further, and uses the basis of BDD to implement coded acceptance tests taking into account the expected behaviour defined previously in the scenarios. ATDD resembles TDD in the sense that it generally automates a series of failing acceptance tests before creating code to make them pass, sharing a similar cycle with that approach.
Testing tools such as Behat, Cucumber or SpecFlow are examples of options that support the use of executable specifications, allowing the use of ATDD in the project, taking advantage of what was defined using BDD.
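For illustration, here is a "Given-When-Then" scenario and matching step definitions using behave, a Python BDD tool; the scenario, file path, and all names are hypothetical examples, not from any real project.

```python
# features/steps/checkout_steps.py - step definitions for a scenario like:
#
#   Scenario: VIP customer gets a discount
#     Given a VIP customer with a cart worth 100 euros
#     When the customer checks out
#     Then the total charged is 90 euros
#
from behave import given, when, then

@given("a VIP customer with a cart worth {amount:d} euros")
def step_given_vip_cart(context, amount):
    context.cart_total = amount
    context.vip = True

@when("the customer checks out")
def step_when_checkout(context):
    # Stand-in for calling the real application code under test.
    discount = 0.1 if context.vip else 0.0
    context.charged = context.cart_total * (1 - discount)

@then("the total charged is {expected:d} euros")
def step_then_total(context, expected):
    assert context.charged == expected
```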
Although not formally defined as an Agile practice, it is common sense in the community that test automation is the main structure for coping with quality assurance when it comes to Agile. It allows other practices, such as ATDD, TDD, and CI, among others, to be as effective as possible.
As you may guess, automated testing means using a separate piece of software to run tests against your software - whether by checking external interfaces (such as mobile or browser-based GUI testing), internal communication between layers (like APIs), or even targeting performance analysis. The most evident and foremost benefit of this approach is avoiding the repetition present in manual processes, not to mention the likelihood of human error being introduced.
These advantages can be seen in testing strategies like regression tests, or in a Continuous Integration pipeline. It is important to consider when and what to automate, given the effort required to implement these types of tests. The level of automated testing coverage - whether unit testing, integration testing, or, more broadly, end-to-end testing - will require different efforts and bring different value, depending on the desired focus. Similar logic applies when deciding on the toolset to be used, for which a conscientious evaluation is suggested. Selenium, Jasmine, and RSpec are examples of testing tools available for various testing purposes.
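As a minimal sketch of a browser-based GUI test using Selenium's Python bindings - the URL, element names, and expected title are illustrative assumptions, and a local Chrome installation is required:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a browser session (Selenium 4 manages the ChromeDriver binary).
driver = webdriver.Chrome()
try:
    # Exercise the external interface exactly as a user would.
    driver.get("https://example.com/login")          # illustrative URL
    driver.find_element(By.NAME, "email").send_keys("user@example.com")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

    # Assert on the observable outcome instead of implementation details.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```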
Session-based testing is another example of a testing practice that, despite not being officially defined as Agile, has seen great adoption within the Agile world. This testing strategy can be described as a more structured way of doing manual exploratory testing, which basically means testing a piece of software without a prior design or definition of test cases, freely seeking defects. Session-based testing follows a "divide and conquer" exploratory idea, splitting time-boxed tests into - guess what? - sessions. It follows a set of (self-explanatory) steps, namely: mission, charter, session, session report, debriefing, and parsing, which cover in just enough detail what is needed for such a process.
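A charter, for instance, can be as short as this purely illustrative example:

"Explore the checkout flow with invalid discount codes to discover pricing and error-handling defects. Time box: 90 minutes."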
Within the Agile context, multiple sessions can be defined for each user story, for example, covering each one in more or less depth depending on its associated risks. Such flexibility allows this practice to cope with the fast pace present in Agile. Also, despite the strong focus on test automation and development-side tests, it is utterly vital to understand the value of manual testing when done effectively: it can reach certain aspects of the software that automated approaches alone cannot. Manual and automated test strategies combined should do the job of guaranteeing the quality assurance a project needs.
DevOps stands for the combination and collaboration of Development and IT Operations teams to achieve continuous and fast delivery. This approach gets both sides working together, reinforcing the importance of their communication and integration through the concept of Infrastructure as Code (IaC). The main steps to reach that involve: Infrastructure Automation (having systems, configurations, and app deployments as code in the overall project's structure); Continuous Delivery (building, testing, and deploying apps in an automated and timely way); and Site Reliability Engineering (operating systems, meaning monitoring and orchestration, and guaranteeing they support such functionalities from the beginning).
This way, the "DevOps ladder" is set and present in the project.
Using DevOps brings advantages like scalability, reliability, security, rapid delivery (and therefore a faster time to market, if needed), a shorter Mean Time To Recovery (MTTR), prevention of human-error risk, and a lower failure rate on new releases, among others. DevOps is undoubtedly seen as a complementary practice when using Agile. It brings aspects such as frequent delivery, early detection of errors, and higher transparency when it comes to monitoring an application, and is even considered part of Agile frameworks like SAFe.
This practice covers topics already described as part of Continuous Integration and DevOps, not to mention its correlation with automated testing. Binding all those concepts together, we can define Continuous Deployment as the next step after Continuous Integration: it makes use of automated testing to ensure that correct code is automatically released into the production environment, usually by taking advantage of DevOps infrastructure tools. The automated release is, in fact, the difference - and a common source of confusion - with Continuous Delivery: although they share the same abbreviation, in the latter the "go to production" action is commonly manual. Continuous Deployment is the complete, end-to-end automated software deployment flow. It can take place as many times as needed for a given application, according to business needs.
The typical steps present in Continuous Deployment are:
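1. A developer commits code to the shared repository;
2. The CI server builds the application and runs the automated test suite;
3. If every check passes, the build artifact is promoted to a staging environment for further automated tests (integration, acceptance, etc.);
4. The release is pushed to production automatically, with no manual approval step;
5. Monitoring and alerting verify the release, with the pipeline able to roll it back if needed.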
These steps should give enough guarantee that the piece of code created is sufficiently well covered and checked, and hence mature enough to reach production without major threats - while keeping the option to revert that change easily from the pipeline, if necessary. There are certainly worries about automating such an important step, and the associated risks are real. It also has to be said that building such a structure around your application carries its costs. Still, the payoff such an approach brings can be a core characteristic of a successful project and software solution.
If you have reached this part of the article, it is fair to assume that you know what Kanban is. In a nutshell, it can be described as a workflow management method for visualizing the work it controls. It was created in the Japanese lean manufacturing field, more precisely in the Toyota Production System.
More recently, the main ideas from this way of working began to be implemented in the software industry, creating what is now called the Kanban Method, whose primary artifact is the Kanban board. This board is the leading real-time visual repository of information and progress for a given process, shedding light on possible blockers and bottlenecks in a straightforward way. The columns on the board represent steps in the flow, and the concept of Work In Progress (with its limit, per column) is a valuable resource. The whole idea is to have your tasks (tickets, issues, you name it) flowing through each column on the board.
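As a toy sketch of the WIP-limit idea in Python - the column names and limits are illustrative, not prescribed by the Kanban Method:

```python
# A toy Kanban board enforcing per-column Work In Progress (WIP) limits.
class KanbanBoard:
    def __init__(self, columns):
        # columns: mapping of column name -> WIP limit (None = unlimited)
        self.limits = columns
        self.cards = {name: [] for name in columns}

    def add(self, column, card):
        limit = self.limits[column]
        if limit is not None and len(self.cards[column]) >= limit:
            raise RuntimeError(f"WIP limit reached in '{column}': "
                               "finish work before pulling more in")
        self.cards[column].append(card)

    def move(self, card, source, target):
        self.add(target, card)          # enforce the target's WIP limit first
        self.cards[source].remove(card)

board = KanbanBoard({"To Do": None, "In Progress": 2, "Done": None})
board.add("To Do", "Fix login bug")
board.add("To Do", "Update docs")
board.move("Fix login bug", "To Do", "In Progress")
```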
Kanban is an evolutionary method that can be implemented very easily (and can progressively evolve its process and use) to get things done. Also, because it is nothing more than a non-disruptive change management system, Kanban has been widely applied in Operations and Maintenance projects. It takes advantage of the points mentioned above in on-demand, just-in-time environments, where frameworks like Scrum would only bring overhead and waste.
Kanban is not strictly defined as an Agile practice. However, it is used to implement Agile and lean principles while increasing both employee and customer satisfaction.
The goal of this article was to present real options that can be used in any SDLC (Software Development Life Cycle) with a strong focus on Agile practices.
This approach is here to stay, and these suggestions have been used, discussed, and improved throughout the years. They solve common problems present in the day-to-day tasks of software development projects.
Although some of these practices are more common and well-known than others, at Imaginary Cloud we first evaluate each scenario in our projects. Then, we apply the practices and techniques that allow the solutions we build to reach the next level and achieve successful deliveries.
Based on our team's expertise, we can define and recommend how to make the best use of Agile practices, and when to apply them in the life cycle of your web application or UI/UX Design projects.
Think you could use a hand? Get in touch!