A GitHub Action Template for a dotnet NuGet package

I’ve recently been using GitHub Actions a lot more heavily, and as you know, I have a lot of GitHub repositories that become NuGet packages. I have been using AppVeyor and have had no issues with it. The only reason I’m switching to GitHub Actions is to learn new things and perhaps to be able to do it all in one site.

This GitHub Actions template will generate a NuGet package for any repository that is set up as a microlibrary (see We are already in the age of Microlibraries).

Requirements to use the DotNet NuGet Package Workflow

This template is easy to use.

Requirements

  1. It is for a microlibrary.
    A microlibrary is basically a repository with only one DLL project. In C#, that means a repo is set up with only one Visual Studio solution that usually has two projects:

    1. The DLL project
    2. The unit test project (The template will fail without a unit test project.)
  2. The template assumes you have everything in a directory called src.
    1. The solution is in the src directory.
    2. There is a nuget.config in the src directory
  3. The DLL project is configured to build a NuGet package on Release builds.
    Note: Add this to your csproj file:

    <GeneratePackageOnBuild Condition="'$(Configuration)'=='Release'">True</GeneratePackageOnBuild>
  4. The GitHub actions template is not in the src directory, but in this directory
    .github/workflows
  5. This template publishes to NuGet.org and you must create a key in NuGet.org, then in GitHub repo settings, make that key a secret called:
    NUGET_API_KEY

Options

Not everything is required.

  1. Versioning is created using the build number and the typed-in version.
    1. Changing the version is easy. Just update the yml file.
    2. Want a new version to start at 0? (For example, you are at 1.1.25 and you want to go to 1.2.0)
      1. Simply set the base offset found below in the ‘# Get build number’ section of the template to subtract the build count.
        For example, if you are on build 121 and your next build will be 122, set the value to -122.
  2. Code Coverage
    1. You can enforce code coverage and get a nice report in pull requests for the provided coverage.
      1. Changing the code coverage percentage requirement is easy.
      2. Disabling code coverage is an option.
    2. The code coverage tool used doesn’t work with windows-latest. Notice the yml file says:
      runs-on: ubuntu-latest
      However, you can run on windows-latest, and this template will simply skip those lines.
  3. There is an option for you to have a vNext branch that will build prerelease versions.
    If you want your vNext branch to be named something else such as future or current then you can just find and replace vNext with the desired branch name.
  4. You can change the version of dotnet:
    dotnet-version: [ '8.0.x' ]

Here is the full workflow template:

# Created by Jared Barneck (Rhyous).
# Used to build dotnet microlibraries and publish them to NuGet
name: CI - Main

# Controls when the workflow will run
on:
  # Triggers the workflow on push events to the "master" or "vNext" branches
  push:
    branches: [ "master", "vNext" ]
    paths-ignore:
      - '**.md' 
      - '**.yml' 
      - '**/*.Tests/**' 
      - '**/.editorconfig' 
      - '**/.gitignore' 
      - '**/docs/**' 
      - '**/NuGet.Config' 
      - '.gitignore'
      
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
 
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    defaults:
      run:
        # There should only be one solution file (.sln) and it should be in the src dir.
        working-directory: src
      
    strategy:
      matrix:
        dotnet-version: [ '8.0.x' ]

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
      
      # Get dotnet setup and ready to work
      - name: Setup .NET Core SDK ${{ matrix.dotnet-version }}
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: ${{ matrix.dotnet-version }}

      # Restore nuget packages
      - name: Restoring NuGet packages
        run: dotnet restore
        
      # Get build number
      - name: Get Build Number with base offset
        uses: mlilback/build-number@v1
        with:
          base: -8
          run-id: ${{github.run_number}}
        
      # Build - Main
      - name: Build source
        if: github.ref == 'refs/heads/master'
        run: dotnet build --configuration Release --no-restore -p:AssemblyVersion=1.3.0 -p:FileVersion=1.3.${{env.BUILD_NUMBER}} -p:Version=1.3.${{env.BUILD_NUMBER}}

      # Build - vNext
      - name: Build source
        if: github.ref == 'refs/heads/vNext'
        run: dotnet build --configuration Release --no-restore -p:AssemblyVersion=2.0.0 -p:FileVersion=2.0.${{env.BUILD_NUMBER}} -p:Version=2.0.${{env.BUILD_NUMBER}} --version-suffix alpha
        
      # Run Unit Tests
      # Add the coverlet.collector NuGet package to the test project: 'dotnet add <TestProject.csproj> package coverlet.collector'
      - name: Run Tests
        run: dotnet test --no-build --configuration Release --verbosity normal --collect:"XPlat Code Coverage" --logger trx --results-directory coverage --filter TestCategory!=SkipCI
        
      # Install ReportGenerator
      - name: Install ReportGenerator
        run: dotnet tool install -g dotnet-reportgenerator-globaltool
        
      # Run ReportGenerator
      - name: Run ReportGenerator
        run: reportgenerator -reports:./coverage/*/coverage.cobertura.xml -targetdir:coveragereport -reportType:Cobertura
        
      # Code Coverage
      - name: Code Coverage Report
        if: runner.os == 'Linux'
        uses: irongut/CodeCoverageSummary@v1.3.0
        with:
          filename: '**/Cobertura.xml'
          badge: true
          fail_below_min: true
          format: markdown
          hide_branch_rate: false
          hide_complexity: true
          indicators: true
          output: both
          thresholds: '60 80'

      - name: Add Coverage PR Comment
        uses: marocchino/sticky-pull-request-comment@v2
        if: runner.os == 'Linux' && github.event_name == 'pull_request'
        with:
          recreate: true
          path: code-coverage-results.md

      # Publish NuGet
      - name: Publish the NuGet package
        if: ${{ (github.event_name == 'push' || github.event_name == 'workflow_dispatch') && github.ref == 'refs/heads/master' }}
        run: dotnet nuget push "**/*.nupkg" --source "https://api.nuget.org/v3/index.json" --api-key ${{ secrets.NUGET_API_KEY }} --skip-duplicate


Where Y.A.G.N.I. falls short

I am a big believer in the YAGNI (You aren’t gonna need it) principle. But it applies to features, not coding practices. It is more of a way to say: Code the MVP (minimum viable product) first. However, when applied outside of that realm, it can be harmful. For example, when talking about ‘how’ to write code, it often doesn’t apply because there are so many instances where you are gonna need it.

YAGNI is for what you code, not how you code.

With how you code, you are gonna need it:

  1. Quality – You are gonna need quality.
  2. Maintainability – You are gonna maintain it.
  3. Replaceability – You are gonna replace it.
  4. Testability – You are gonna test it
  5. Security – You are gonna need security.

Quality – You are gonna need quality

Code should always be written with quality. There are a lot of principles that are guidelines for quality, starting with SOLID. If you struggle to understand the SOLID principles or think they are too general, then I would suggest you follow my SOLID Training Wheels until you understand them better.

You may have heard of the Quality-Speed-Cost Triangle. The triangle is used in manufacturing with the following rule: You can only choose two of the three. Because code is like manufacturing, some believe this triangle applies to code. It doesn’t. Software is not physical, it is virtual. Once you create a piece of code, you can run that code a million times in what a human would consider almost instant.

You can use the Quality-Speed-Cost Triangle with code, but with code, the triangle does not have the same rules. For example, the rule that you only get two doesn’t apply.  Why? Because Quality is the only way to get speed and low cost.

In code, the rule is: Quality gives you the other two.

Unlike manufacturing physical products, software actually gets faster and cheaper when you increase the quality. You can’t have code speed without code quality. You can’t have low cost without code quality.

So focus on SOLID. The S (Single Responsibility Principle) and I (Interface Segregation Principle) both actually mean smaller objects (mostly classes and methods) in code. Smaller building blocks lead to faster code. When you write with smaller building blocks, there is less duplication. Every line of code has a cost to both create and maintain. Duplicate code destroys speed and raises costs. So smaller or ‘single’ will always be cheaper and faster.
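
As a quick, hedged illustration (these interface names are hypothetical, not from any particular code base), “smaller” often means splitting one wide interface into several narrow ones so each consumer depends only on what it actually uses:

// Hypothetical wide interface that mixes responsibilities.
public interface IOrderService
{
    void PlaceOrder(Order order);
    void CancelOrder(int orderId);
    string RenderInvoiceHtml(int orderId);
    void EmailInvoice(int orderId, string emailAddress);
}

// Smaller, single-responsibility interfaces (the S and I of SOLID).
// Consumers now depend only on the one small piece they actually use.
public interface IOrderPlacer { void PlaceOrder(Order order); }
public interface IOrderCanceler { void CancelOrder(int orderId); }
public interface IInvoiceRenderer { string RenderInvoiceHtml(int orderId); }
public interface IInvoiceEmailer { void EmailInvoice(int orderId, string emailAddress); }

public class Order { /* properties omitted for brevity */ }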

Maintainability – You are gonna maintain it

If your company goes out of business (or your open source project dies), maybe your code isn’t maintained. But is going out of business your goal? If not, then your code is going to last. Every line of code has a maintenance cost.

The smaller or more ‘single (S in SOLID)’ the code is, the easier it is to unit test, the less likely it is to have bugs, the less likely it is to change (part of O in SOLID), and the more likely it is to be finished and never touched again. If most of your code is SOLID, small, unit-tested, replaceable building blocks, your maintenance costs will stay extremely low. Finished code leads to low maintenance costs.

Replaceability – You are gonna replace it

Systems die and get old. Code that uses systems will die with those systems. If the code is like the arteries in a human, entwined in everything everywhere, there is no replacing it. If the code is more like car parts, you can replace it with some work. If the code is more like computer peripherals, where you can add them, remove them, disable them in place, and replace them, then you are going to be faster. Quality is going to be easier because replacing pieces is going to be cheaper.

In SOLID, the S and I make things smaller, and usually (though not always) smaller is easier to replace. The O hints that you should replace code instead of changing code. The L and D are all about replaceability. Replaceability directly affects quality and future cost. If you are using MS SQL or Oracle and suddenly need to scale to hundreds of database servers on the cloud, you want your repository to be replaceable so you can migrate easily to a cloud database by replacing your repository.
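
For example, here is a minimal sketch (the names are hypothetical) of what a replaceable repository looks like; migrating from MS SQL to a cloud database means adding an implementation and changing what gets injected, not rewriting every consumer:

// Hypothetical names for illustration only; this is a sketch, not a full data layer.
public class Customer { public int Id { get; set; } public string Name { get; set; } }

// The rest of the code depends only on this interface (the D in SOLID).
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Save(Customer customer);
}

// One implementation per storage technology.
public class SqlCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id) { /* query MS SQL here */ return new Customer { Id = id }; }
    public void Save(Customer customer) { /* write to MS SQL here */ }
}

public class CloudCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id) { /* query the cloud database here */ return new Customer { Id = id }; }
    public void Save(Customer customer) { /* write to the cloud database here */ }
}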

Many companies who wrote monoliths without replaceable parts are now suffering this reality, as they either fail to replace their code with modern cloud architecture or spend exorbitant amounts to rewrite.

Every letter of SOLID in some way hints at replaceability. S – single, and it is easier to replace a single thing. O means the code is closed for changes, so any new functionality goes in new code that likely replaces or extends the old code. L is about replacing parent classes with child classes. I is about having small interfaces that are easily replaced with other implementations. D is literally about being able to inject any implementation from one to many interchangeable (or replaceable) implementations by inverting the dependency. (The D is dependency inversion, not dependency injection, though a lot of people are now saying it is the latter.)

Testability – You are gonna test it

This is similar to Maintainability, but it is different. Unit tests help finish and close code without bugs. The more tests you have, the less likely you are to break anything as you add features to your code base.

SOLID doesn’t cover testing. But other principles, such as TDD, and industry standards, such as having greater than 70% code coverage (I say you should have 100% and close your code), all indicate that testability is key to speed and keeping costs down.

If every time a dev makes a change, they introduce a bug due to lack of unit tests or automated tests, the costs will grow and the speed will slow as work stops to fix bugs.

However, if the tests warn you of the bugs as you are coding, the cost will stay low and there won’t be bumps (aka bugs) in the road slowing you down.

Security – You are gonna need it

SOLID also doesn’t discuss security, but you are gonna need it.

So many people said they would never replace their database and yet are now trying to replace their database with cloud options.

Lawsuits from breached data are not cheap. Trying to bolt on security is not cheap. However, if you made everything easily replaceable, it can be a low-cost task to replace implementations with secure implementations later in the process. If code is replaceable, adding security becomes much easier.

Conclusion

YAGNI is a good rule of thumb to live by for what you code, i.e. features and the MVP. However, YAGNI will ruin your code base if you apply it to how you code.

 


Scenario Weaving – a Software Development Antipattern

In Software Engineering, there is a term called Cyclomatic Complexity. It is a measurement of the number of branches in a method. There is a huge correlation between high Cyclomatic Complexity and bugs.

Today, I want to discuss an antipattern that leads to high Cyclomatic Complexity: Scenario weaving.
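
As a rough, hedged illustration (hypothetical code, and different tools count slightly differently), cyclomatic complexity is roughly one plus the number of decision points in a method:

// Roughly: 1 (for the method) + 1 per if/else-if/loop/case decision point.
// This hypothetical method has an if, an else if, and a loop, so most tools
// would score it around 4.
public decimal CalculateShipping(decimal weight, bool isRush, decimal[] surcharges)
{
    decimal cost = 5m;
    if (isRush)                           // +1
        cost += 10m;
    else if (weight > 20m)                // +1
        cost += 5m;
    foreach (var surcharge in surcharges) // +1
        cost += surcharge;
    return cost;
}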

What is Scenario Weaving?

Scenario weaving is when a method handles multiple scenarios. Yes, this clearly breaks the Single Responsibility Principle.

Any time you see multiple “if” conditions in a method, you can be sure that the code is likely doing scenario weaving.

If you see a switch/case statement, you can be 100% sure that the code is doing scenario weaving, as each case is a scenario. By the way, you should almost never be using the switch/case statement anymore. See What to do instead of using switch/case.

Scenario Weaving Example

This is a bad, antipattern example of scenario weaving.

This is a simple example of Cookie Monster code. For each cookie type, have the closest monster that likes that cookie type eat it.

public void EatCookie(Cookie cookie)
{
    if (cookie.Type == "ChocolateChip")
    {
        if (chocolateMonster.IsCloser())
           chocolateMonster.Eat(cookie);
        else if (cookieMonster.IsCloser())
           cookieMonster.Eat(cookie);
    }
    if (cookie.Type == "MacadamianNut")
    {
        if (nutMonster.IsCloser())
            nutMonster.Eat(cookie);
        else if (cookieMonster.IsCloser())
            cookieMonster.Eat(cookie);
    }
    // ...
}

What type of cookie and which monster is closer can create multiple scenarios. As you can see, this method is trying to handle all the scenarios. Multiple scenarios have been woven together in this method. As we add more scenarios, the code just gets more and more complex. It breaks every letter of SOLID, forces us to repeat code a lot (breaking DRY), and really becomes a beast to unit test. The more cookies, flavors, etc., the more complex this method gets.

Scenario Weaving Refactored Example

This is a good, best-practice example of code that doesn’t use scenario weaving and has low cyclomatic complexity.
It is the same simple Cookie Monster example: for each cookie type, have the closest monster that likes that cookie type eat it.

public class MonsterSelector : IMonsterSelector // Interface omitted for brevity.
{
    // Maps each cookie type to the monsters that like that cookie type. Inject this in the constructor.
    Dictionary<string, List<IMonster>> _monstersByCookieType; 

    // Constructor - omitted for brevity

    public IMonster SelectBy(ICookie cookie)
    {
        var monsterList = _monstersByCookieType[cookie.Type];
        return monsterList.SelectClosestMonster();
    }
}

public void EatCookie(Cookie cookie)
{
    var monster = monsterSelector.SelectBy(cookie);
    monster.Eat(cookie);
}

What type of cookie and which monster is closer still creates the same multiple scenarios. However, the above code handles an infinite number of scenarios without change and without increasing complexity. Also, this code is really easy to unit test.

Yes, more cookie types can exist. Yes, more monsters can exist. Yes, monsters can be at various distances from a cookie. Doesn’t matter. The code works in every scenario.

Now, in this naive case above, we solved all scenarios with one piece of SOLID code. However, sometimes you might have specific code per scenario. Imagine that you have some monsters that eat in a special way and implement the IMonster interface. You would have to write their special Eat method separately for each special monster.

The important concepts here:

  1. Create a scenario selector.
  2. Have code that handles each scenario and make sure that code all implements the same scenario handling method signature (i.e. a scenario handling interface).
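
Sticking with the cookie example, a minimal sketch of those two concepts might look like this (CrumbSprayingMonster is a hypothetical name; IMonster, ICookie, and the MonsterSelector come from the example above):

// 1. The scenario-handling interface: every scenario handler shares one method signature.
public interface IMonster
{
    void Eat(ICookie cookie);
}

// 2. Each scenario gets its own small implementation. A monster that eats in a
//    special way is just one more class; adding it changes no existing code,
//    because the MonsterSelector above picks the right handler at runtime.
public class CrumbSprayingMonster : IMonster
{
    public void Eat(ICookie cookie)
    {
        // The special eating behavior lives here, isolated from every other scenario.
    }
}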

Conclusion

Scenario weaving is an antipattern that leads to complex, buggy code, where one bug can affect multiple scenarios. It results in low-quality code that breaks the SOLID and DRY principles and is difficult to unit test.

Scenario selecting, with separate code handling each scenario, leads to bug-free code, or at least to bugs that only affect one scenario. It results in high-quality code that follows the SOLID and DRY principles and is easy to unit test.


What is a Lazy Injectable Property in C#?

The following is an example of a Lazy Injectable Property. If you are not working on legacy code that lacks Dependency Injection (DI) and Inversion of Control (IoC) with constructor injection, then this article is likely not relevant to you.

/// <summary>This is an example of a Lazy Injectable that will eventually be replaced with constructor injection in the future.</summary>
internal IMyNewClass MyNewClass
{
    get { return _MyNewClass ?? (_MyNewClass = new MyNewClass()); }
    set { _MyNewClass = value; }
} private IMyNewClass _MyNewClass;

Another example is very similar but uses an IoC container.

/// <summary>This is an example of a Lazy Injectable that will eventually be replaced with constructor injection in the future.</summary>
internal IMyNewClass MyNewClass
{
    get { return _MyNewClass ?? (_MyNewClass = IoCContainer.Resolve<IMyNewClass>()); }
    set { _MyNewClass = value; }
} private IMyNewClass _MyNewClass;
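
If the code base is on C# 8 or later, the same pattern can be written a little more tersely with the null-coalescing assignment operator; this is only a syntax variation of the examples above:

/// <summary>The same Lazy Injectable, written with the C# 8 ??= operator.</summary>
internal IMyNewClass MyNewClass
{
    get { return _MyNewClass ??= new MyNewClass(); }
    set { _MyNewClass = value; }
} private IMyNewClass _MyNewClass;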

Why use a Lazy Injectable?

Normally, you wouldn’t. Both examples above are antipatterns. In the first example, while it does use the interface, it also couples the code to an implementation. In the second example, it couples the code to an IoC container. This is essentially the well-known ServiceLocator antipattern.

However, it is very handy to use temporarily when working with legacy code, either to write new code or to refactor legacy code with modern, quality coding practices. Even the top book on DI/IoC for C# mentions that less effective antipatterns can be useful temporarily when refactoring legacy code.

Example Of When to Use

We all know that Dependency Injection (DI) and Inversion of Control (IoC) with constructor injection of interfaces, not concretes, is the preferred method of writing code. With DI/IoC and constructor injection, it is straightforward to do object composition in the composition root. However, when working in legacy code without DI/IoC, that isn’t always possible. Let’s say there is a legacy class whose constructor has over 100 references in code you can’t touch. Changing the constructor is not an option. You need to change code in this legacy class. Remember, it doesn’t support DI/IoC with constructor injection, and there is no composition root.

Let’s call this legacy class, MyLegacyClass.

However, you want to write your new code with DI/IoC and constructor injection.

public interface IMyNewClass
{
    void SomeMethod();
}

public class MyNewClass : IMyNewClass
{
    private readonly ISomeDependency _someDependency;
    public MyNewClass(ISomeDependency someDependency)
    {
        _someDependency = someDependency;
    }

    public void SomeMethod()
    {
         // .. some implementation
    }
}

So without a composition root, how would we implement this in MyLegacyClass? The Lazy Injectable Property.

public class MyLegacyClass 
{
    public MyLegacyClass ()
    {
    }

    /// <summary>This is an example of a Lazy Injectable that will eventually be replaced with constructor injection in the future.</summary>
    internal IMyNewClass MyNewClass
    {
        get { return _MyNewClass ?? (_MyNewClass = new MyNewClass()); }
        set { _MyNewClass = value; }
    } private IMyNewClass _MyNewClass;

    public void SomeLegacyMethod()
    {
         // .. some implementation
         MyNewClass.SomeMethod();
    }
}

What are the benefits? Why would I do this?

You do this because your current code is worse and you need it to be better, but you can’t get from bad to best in one step. Code often has to be refactored from bad to less bad, then from less bad to good, and then from good to best.

New code uses best-practice patterns. New code can be easily unit tested.

Old code isn’t made worse. Old code can more easily be unit tested. The Lazy part of the property is effective because the interface can be mocked without the constructor of the dependency ever being called.
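
For example, here is a hedged sketch of such a unit test (it assumes MSTest and Moq, and the test name is hypothetical; because the property is internal, the test assembly would also need InternalsVisibleTo):

[TestMethod]
public void SomeLegacyMethod_Calls_MyNewClass_SomeMethod()
{
    // Arrange - inject a mock through the Lazy Injectable Property,
    // so MyNewClass's constructor (and its dependencies) are never called.
    var legacy = new MyLegacyClass();
    var mockMyNewClass = new Mock<IMyNewClass>();
    legacy.MyNewClass = mockMyNewClass.Object;

    // Act
    legacy.SomeLegacyMethod();

    // Assert
    mockMyNewClass.Verify(m => m.SomeMethod(), Times.Once());
}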

This Lazy Injectable Property basically creates a micro-composition root that you can use temporarily as you add code or refactor. Remember, the long-term goal is DI/IoC with constructor injection everywhere. The process has to start somewhere, so start with new code and then require it for changed code. Eventually, more and more code will support constructor injection, until after a time of practicing this, the effort to migrate fully to DI/IoC with constructor injection everywhere becomes a small project.

 


Is your code a bug factory?

Many code bases are “working” but not quality. Often, there are so many bugs coming in that it is hard to work on new features. It can be hard to express to upper management why so many bugs exist. This is a quick estimate evaluation that easily gives you a quantifiable number representing whether you are a bug factory or not.

You’ve heard the saying, “An ounce of prevention is worth a pound of cure.” Well, in code, you can say: “An ounce of quality will prevent a pound of bugs.” But a saying is just that. How can we make it measurable? This questionnaire will do just that: make this saying measurable.

This is a golf-like score where the lower, the better. The highest score (bad) is 1000.

| Your Score % | Question | Default Value | Note 1 | Note 2 |
| --- | --- | --- | --- | --- |
|  | Is your code tested? |  | Non-scoring header line. |  |
|  | What percent of your code is not covered by unit tests? | 200 | Enter the % of uncovered code, multiplied by 2. | Use the default value of 200 if you don’t know the code coverage or don’t have unit tests. |
|  | When you fix a bug, do you first create a unit test that exposes the bug? | 50 | Enter the estimated % of times this is not practiced, divided by 2. | Use the default value if you don’t expose the bug with a unit test before fixing it. |
|  | What percent of your customer use cases are not covered by automated tests? | 100 | Enter the % of customer use cases that are not covered. | Use the default value of 100 if you don’t have this information. |
|  | Does your code follow common principles such as 10/100, S.O.L.I.D., or DRY? |  | Non-scoring header line. |  |
|  | What percent of the code breaks the 10/100 principle? | 125 | Enter the % of code that doesn’t follow the 10/100 principle, multiplied by 1.25. | Use the default value of 125 if you have a lot of large files and methods or you don’t know the % of code that follows the 10/100 principle. |
|  | S – What percent of classes have more than 1 responsibility? | 100 | Enter the % of classes that break the S in SOLID. Hint: Estimate 1 responsibility for every 50 lines. | If you can’t get this information easily, use the default of 100. |
|  | O – What percent of your code never changes because it is tested, stable, bug free, and open for extension? | 50 | Enter the % of classes that never change but can be extended. | Use the default of 50 if you have no idea. |
|  | L – Do you have a best practice for testing inheritance? | 25 | Enter the % of inherited classes that don’t have inheritance tests, divided by 4. | If you don’t know what the L means and/or no inheritance tests exist, use the default of 25. |
|  | I – What percent of interfaces have more than 10 methods/properties? | 50 | Enter the % of interfaces that have more than 10 combined methods and properties, divided by 2. | Use the default value if you don’t know, or if the code has few interfaces. |
|  | D – What percent of your code is using dependency injection? | 150 | Enter the % of classes that don’t use constructor injection, multiplied by 1.5. | Use the default value if you aren’t using DI. If you have DI but don’t use it properly, use 75. |
|  | DRY – Do you have standards and practices to prevent duplicate code? | 50 | Enter the % of code that could be considered duplicate code, divided by 2. | Use the default of 50 if no standards to prevent duplicate code exist. If you have common libraries but they are a mess, use 25. |
|  | Do you do pair programming at least 1 hour a week? | 25 | Enter the % of developers that do not practice pair programming for 1 hour a week. | Use the default if pair programming is not a weekly practice. |
|  | Do you require code reviews? | 25 | Enter the % of check-ins that do not have code reviews, divided by 4. | Use the default if you don’t require code reviews. If you require code reviews but have no standard code review checklist, enter 13. |
|  | Do you have a 1-step CI/CD process to 1) build, 2) run unit tests, 3) deploy, 4) run automated tests, 5) gate each check-in? | 50 | Enter 0 if you have it all. Enter 10 for each step you don’t have. | Use the default of 50 if you don’t have any of it, or if devs can build locally in 1 step after check-out. |

Now, add up the scores in the Your Score column. Enter the total into this statement.

This code is ______ times more likely to create bugs than average.

Reacting to the Code Score

These are generalizations about what to do based on your score.

Warning! Neither I nor this blog is your company’s CTO or expert. Each project is different. These generalized suggestions should be analyzed by a company’s CTO and the expert team members the CTO trusts. It is solely their responsibility to make any decisions.

| Score | What to do |
| --- | --- |
| 1000 | Max. The worst possible score. |
| 700+ | Emergency: The code is in trouble and needs everything refactored, which will be very expensive. You need to heavily weigh the pros and cons of a rewrite. For both a rewrite or a refactor, much of the code and logic can remain in place. One option is to bring in an expert who can help you follow the strangler pattern to replace your code with new code a piece at a time. If the code is a coupled single piece, there may be prerequisite projects to decouple a piece. |
| 300-499 | There may be a few problem areas. Track bug areas and improve the code quality of those buggy areas. You may need to look at the code weaknesses and address them as projects, as incremental improvements may not be enough. |
| 100-299 | There may be a few problem areas. Do incremental improvements. |
| < 100 | Keep up the good work. |

So now you know whether your code is a bug factory or not. Congratulations on knowing the state of your code.


How to rate your code from 1 to 10?

Rating code can be hard, especially when reports to upper management are required. It can seem difficult and daunting. However, it doesn’t have to be. The following questions can be answered in 5 minutes by someone with as little as 90 days in a code base, or someone unfamiliar with the code base can take a little time to do some spot checks (at least 5).

I’m not saying this is the most accurate measure; neither is T-shirt sizing a story. This is a quick generalization that actually works for rating code quickly. I’m not saying a full tech audit should be replaced with this simple questionnaire, but you are less likely to be surprised by a tech audit’s results if this is done first. Also, this will give plenty of ideas for what to work on.

10 out of 10 Code

This is a simple code quality questionnaire.

Note: If a lead dev can answer this off the top of their head, great, have them do so. Otherwise, this may take time. Each question has some hints underneath: sub-questions that can be answered positively or negatively, and ways to spot-check some objects. The scoring assumes 5 objects are checked. Spot checking 5 objects from various parts of the code should be enough.

Scoring:

  • Yes = 1 (a positive answer, or 4 out of 5 object checks passing, counts as a yes; otherwise, it is a no.)
  • No = 0

Questionnaire:

  1. The code has 70% code coverage.
    Just trust your coverage tool. But you should know not all coverage tools are the same. For example, if testing a bool, if only one test exists passing only true or false but not both, one code coverage tool might call it 100% covered and one might consider it 50% covered.
  2. S = Single Responsibility Principle
    Does each object in your code have 1 responsibility? Don’t know off the top of your head? Check 5 classes from various parts of the code base. Think smaller. Broad responsibilities are not single.
  3. O = Open/Closed Principle
    Does your architecture lend itself to avoiding breaking changes? Look at a number of check-ins that touch existing code. Do they include breaking changes (i.e., a change to a public signature)?
  4. L = Liskov Substitution principle
    Find inherited classes. Should they be using inheritance, or should they have been written with a “has a” over an “is a”? Do they have the ability to crash where a parent wouldn’t?
    If you don’t use inheritance because you always use “has a” over “is a” or because you always use interfaces, then give yourself a yes here.
  5. I = Interface segregation principle
    Check your interfaces, are they very small? No more than 5 total properties/methods on the interface on average?
    If most objects don’t have interfaces, this is a fail.
  6. D = Dependency Injection (with most objects using constructor injection)
    Do the code’s entry points have a composition root? Yes/no? Also, check 5 classes from various parts of the code base. How many classes use constructor injection? 95%? Yes/no?
  7. DRY = Don’t Repeat Yourself (the inverse of S: not only does a class have one responsibility, but no more than one class has that one responsibility).
    Do you have duplicate code doing the same thing? Are there groups of lines copied and pasted in many places? If you have almost no duplicate code, you get a point here; otherwise, if you have lots of duplicate code, you don’t.
  8. Cyclomatic Complexity average under 3.
    What is the average number of branches in a method? Don’t know? Spot-check 5 methods from different parts of your code, or just guess based on the number of lines in a method. Not including the signature, are the guts of each method 10 lines or less, with an average of 5 lines?
  9. The code follows 90% of the Joel Test and/or the 12-factor app.
    Google “Joel Test” or “12 factor app”. If you follow 90% of the items on those lists, you get a point; otherwise you don’t.
  10. The code has a gated CI/CD end-to-end pipeline that goes all the way to production
    1. An application’s CI/CD pipeline includes 2 code reviews, build, unit tests, deploy, and automated tests against the deployed environment; then, only if all of that passes, a merge to master is allowed; and after check-in, the main build repeats much of the above and deploys all the way to production.
    2. A library’s CI/CD pipeline includes 2 code reviews, build, unit tests, and any automated tests; then, only if all of that passes, a merge to master is allowed, and the library is published as a package.
  11. Even if you are at 0, if your code is working and making your company money, and still selling new customers, give it a +1. Bad code that works is still a solution and is still better than nothing and deserves 1 point.

What is your code quality score out of 10?

Reacting to the Code Score

These are generalizations about what to do based on your score.

Warning! Neither I nor this blog is your company’s CTO or expert. Each project is different. These generalized suggestions should be analyzed by a company’s CTO and the expert team members the CTO trusts. It is solely their responsibility to make any decisions.

  • If you are 10 out of 10, keep doing what you are doing.
  • If you are 7-9 out of 10, keep doing incremental improvements.
  • If you are 4 to 6 out of 10, you may need to look at your weaknesses and address them as projects, as incremental improvements may not be enough.
  • If you are 2-3 out of 10, you need multiple major refactor projects. Likely you don’t have Dependency Injection. Start your big projects by adding it.
  • If you are 1 out of 10, you need everything refactored, which will be very expensive, and you need to heavily weigh the pros and cons of a rewrite. For both a rewrite or a refactor, much of the code and logic can remain in place. One option is to bring in an expert who can help you follow the strangler pattern to replace your code with new code a piece at a time. If the code is a coupled single piece, there may be prerequisite projects to decouple a piece.

Congratulations on knowing the state of your code.


Mono Repo vs Micro Repo – Micro Repo wins in a landslide

A while back, I had to make the decision between a Mono Repo and a Micro Repo for my Rhyous libraries.
I chose Micro Repos because it seemed like the “common sense” better choice. I’m so glad I did.

What is a Mono Repo?

1 Repo for all your projects and libraries

What is a Micro Repo? (i.e. Poly Repo or Multi Repo)

1 Repo for 1 library. Sometimes Micro Repos are called Poly Repos or Multi Repos, but those terms don’t really imply the 1-to-1 ratio or the level of smallness that Micro Repo implies.

Why My Gut Chose Micro Repos

I do not have the amount of code a large company like Microsoft or Google has. However, compared to the average developer, I have way more open source projects on my GitHub account.

Despite choosing micro repos over a mono repo for my open source, I have more experience with mono repos because the companies I worked for have had mono repos. Also, I chose a mono repo for a couple of my open source projects. I have zero regrets for any of my poly repos, but I regret all of my mono repos.

My employers’ repos were mono repos because they grew that way. It wasn’t a conscious, well-informed decision. To the best of my knowledge, they continue to be mono repos only because they are stuck. It is too hard (they think) to break up their mono repo.

I do not enjoy working with my mono repos.

It might sound awesome to have everything in one place. It isn’t. We’ve already proven this time and again.

Stop and think about what it looks like when you put all your classes and methods in one file. Is it good?
Think about that for a minute. Smaller is almost always better.

If this article sounds biased toward micro repos, it probably is, because I long ago saw micro repos as the clear winner of this argument and struggle to find reasons to support a mono repo.

The goal of this article is to show that the move to microservices isn’t the only recent movement; we are well into the age of microlibraries, and the move to microservices and microlibraries also includes a move to micro repos.

GitHub won the source control world and dominates the market. They won for many reasons, but one of those reasons is how easy they made working with micro repos.

The Dev world is better with micro repos. Your source code will be better with micro Repos.

I am writing a book called “Think Smaller: A guide to writing your best code” and before I unequivocally declare micro libraries as the way to go, I need to do an analysis on it because gut feelings can be wrong. The goal of this analysis is to investigate if my gut was wrong. It pains me to say it, but my gut has been wrong before. This time it wasn’t. Now here is the analysis of why my gut was right.

Mono Repo

Mono Repo with:

  • Direct project references (instead of use of package management)
  • Automated CI/CD Pipelines

Pros of Mono Repos

If a pro is shared with micro repos, it is not listed.

  •  Atomic Changes/Large-Scale Code Refactoring – For a given set of code openable by an IDE as one group of code (often called a solution) you can do large scale refactoring using the IDE. There is a lot of tooling in the IDE around this.
    – However, when a mono repo has multiple solutions, you don’t get that for the other solutions. After that you have to write scripts, in which case, you get no benefit over micro repos.

Yes, it is true. I found only 1 pro. If you have a pro that is truly a pro of mono repo that can’t be replicated with micro repos, please comment.

Pros from other blog posts (Most didn’t stand up to scrutiny)

I did a survey of the first five sites I found on a Google search that list Mono Repo pros. Most of these turned out not to be pros:

  1. https://betterprogramming.pub/the-pros-and-cons-monorepos-explained-f86c998392e1
  2. https://fossa.com/blog/pros-cons-using-monorepos/
  3. https://kinsta.com/blog/monorepo-vs-multi-repo/
  4. *https://semaphoreci.com/blog/what-is-monorepo
  5. https://circleci.com/blog/monorepo-dev-practices/

* Only site that had real-world use cases.

Now, remember, just because someone writes in a blog (including this one that you are reading) that something is a pro or a con, you shouldn’t trust it without evidence and argument to back it up. A survey of such pros found most of the data is “made up” for click-bait. Most of the so-called pros and cons in these articles don’t hold up to scrutiny.

I will tag these blog-post-listed benefits after my analysis.

True = It is a benefit of only a mono repo
False = It is not a benefit of a mono repo at all. It is a con listed as a pro.
Shared = You get this with both Mono Repos and Micro Repos

  • One source of truth — Instead of having a lot of repositories with their own configs, we can have a single configuration to manage all the projects, making it easier to manage.
    (False)
    Why false? Micro libraries are actually more of a single source of truth for any given piece of code. With a mono repo, every branch has a copy of a library even if there are no plans to edit that library. Many teams end up using many branches. Teams end up with dozens of branches and no one ever knows which ones have changes or not. Devs often don’t know which branch has the latest changes. There is nothing further from one source of truth.
  • Code reuse/Simplified Dependency Management — If there is a common code or a dependency that has to be used in different projects, can it be shared easily?
    (Shared or False)
    Why Shared? Sharing code is just as easy with Micro Repos. Publish the code to a package management system and anybody can share your code.
    Why False? There are huge burdens to sharing code as files as opposed to using a package manager such as npm, maven, nuget, etc. If 10 separate projects share code, and you need to update something simple such as the folder layout of that code, you now can’t change a folder layout without breaking all 10. You have to find every piece of code in the entire repo that references the code and update all of them. You might not even have access to them all as they may be owned by other teams. This means it takes bureaucracy to make a change to reused code. If a design (mono repo) leads to a state where doing something as simple as moving files and folders breaks the world, how can you call that design a pro and not a con?
  • Transparency — It gives us visibility of code used in every project. We will be able to check all the code in a single place.
    (Shared)
    Why Shared? Well, with Micro Libraries, just because they are separate repos doesn’t mean they aren’t in one place. Whether you are creating your repos in public GitHub, GitHub Enterprise, BitBucket, Amazon, Azure, or wherever, you still have your code in one place.
  • Atomic changes/Large-Scale Code Refactoring — We can make a single change and reflect the changes in all the packages, thus making development much quicker.
    (True)
    This is true. If you want to change something that affects an entire repo, or even a handful of projects in a repo, you can do it faster in a mono repo. Careful, however. While this is true, it breaks the O in SOLID. If a library has to update all its consumers, it probably isn’t doing something right in the first place. It is a warning sign that your architecture is bad. A second issue is that this ability also means you can make sweeping breaking changes.
  • Better Visibility and Collaboration Across Teams
    (Shared)
    Why Shared? Because with Micro Repos everyone can still have read-only access to all repos. They can still know what other teams are doing. Tooling is what matters here. With GitHub, I can search for code across multiple repos. A dev doesn’t have to be in a mono repo to see if code already exists. In fact, repo names give you one more item to search on that mono repos don’t have, which can help search results be better in micro repos than in mono repos.
  • Lowers Barriers of Entry/Onboarding — When new staff members start working for a company, they need to download the code and install the required tools to begin working on their tasks.
    (False)

    Why False? A mono repo does not do a new developer any favors. This is actually more of a con than a pro. There is no evidence that this is a pro, while there is evidence that it is a con. A new dev often has to check out gigabytes of code. The statement “they need to download the code and install the required tools to begin working” implies they need to download all the code. If you have 100 GB, or even 10 GB, is checking all that out easier when onboarding someone? What about overwhelming a new dev? With a micro library, a new dev can download one micro repo, which is smaller, making it quicker to see, read, understand, run tests against, and code against. A new dev can be productive in an hour with a micro repo. With a mono repo, they might not even have the code downloaded in an hour, or even in the first week. I’ve seen mono repos that take three weeks to set up a running environment.

     

  • Easy to run the project locally
    (Shared)
    This usually requires a script. In a mono repo, the script will be part of the mono repo. In a poly repo, you can have that script in a separate repo that a new dev can check out in minutes (not hours or days) and run quickly.
    -This is about tooling, and isn’t a pro or con of either.
  • Unified CI/CD – Shared pipelines for build, test, release, deploy, etc.
    (False/Shared)
    Why false? Because sharing a pipeline isn’t a good thing. That is a con. It breaks the DevOps best practice of developers managing their own builds. How can a dev have autonomy to change their pipeline if it affects every other pipeline?

    Why Shared?
    This is about tooling, and really is not a pro or con of either mono repos or micro repos. You can do this with either. However, it is far easier to get CI/CD working with micro repos.

Cons of Mono Repos

I was surprised by how the cons piled up. However, it is important not just to list a con, but also a potential solution to the con. If there is an easy solution, you can overlook the con. If there is not an easy solution, the con should have more negative weight.

  1. Fails to prevent decoupling – Nothing in a mono repo prevents tight coupling by default.
    Solution: There is no solution in mono repos to this except using conventions. Note: Requiring conventions is a problem. I call them uphill processes. Like water takes the easiest path, so do people. When you make a convention, you are making an uphill process, and like water, people are likely not to follow it. Downhill processes are easier to follow. So conventions require constant training and costly oversight.

    Because coupling is only prevented by convention, it is easy to fall into the trap of these coupling issues.

    There are many forms of coupling:

    1. Solution coupling
    2. Project coupling
    3. File system coupling
      1. Folder coupling – Many projects can reference other files and folders. With mono repos you can’t even change file and folder organization without breaking the world.
      2. File coupling  – Other projects can share not just your output, but your actual file, which means what you think is encapsulated in private or internal methods, might not be encapsulated.
    4. Build coupling – Break one tiny thing and the entire build system can be held up. Also, on every build you can spend processor power building thousands of projects that never changed.
    5. Test coupling – Libraries can easily end up with crazy test dependencies.
    6. Release coupling – You can spend more money on storage because you have to store the build output of every library every time.
  2. Fails to Prevent Monoliths – By doing nothing to prevent coupling, it does nothing to prevent monolithic code
    Solution: There is no solution in mono repos to this except using conventions.
    Monoliths are not exactly a problem; they are a state of a code base. However, Monolith has come to mean a giant piece of coupled software that has to be built, released, and deployed together because it is too big to break up. Note: About doing nothing. Some will argue that it isn’t the repo’s job to do the above. I argue that doing nothing to help is a con. If someone is about to accidentally run over a child with their car, and you can stop it easily and safely, but you don’t, would you argue that doing nothing is fine because that kid isn’t your responsibility? Of course not. Doing nothing to help is a con. While a repository usually doesn’t have life-and-death consequences, the point is that failing to prevent issues is a con.
  3. All Changes Are Major – A change can have major consequences. It can break the world. In a large mono repo, you could spend days trying to figure out what your change impacted and often end up having to revert code.
    Solution: None, really. You can change the way you reference projects, using package management instead, which essentially means you have micro repos in your mono repo.
  4. Builds take a long time
    Solution: None, really. If you change code, every other piece of code that depends on that code must build.
    – Builds can take a long time because you have to build the world every time.
  5. Mono repos cost more – Even a tiny change can cause the entire world to rebuild, which can cost a lot of money in processor power and cloud build agent time.
  6. Releases with no changes – Much of your released code will be versioned as new, yet have no change from the prior version.
  7. Not SOLID – Does NOT promote any SOLID programming; in fact, it makes it easier to break SOLID practices.
    Breaks the S in SOLID. MonoRepos are not single responsibility. You don’t think SOLID only applies to the actual code, right? It applies to everything around the code, too.

    1. Because a repo has many responsibilities, it is constantly changing, breaking the O in SOLID.
  8. Increases Onboarding Complexity – It is just harder to work with mono repos as a new developer. One repo does nothing to ease a new developer’s burdens. In fact, it increases them.
    Solution: Train on conventions. Train on how to do partial check-outs, though dependencies often prevent this.
    – Developers often have to download gigabytes and gigabytes of data. With the world-wide work-anywhere workplace, this can take days for some offsite developers, and may never fully succeed.
    – Overwhelming code base.
  9. Security – Information disclosure
    Solution: Some repo tools can solve this, but only if the code is not coupled.
    – Easy to give a new user access to all the code. In fact, it is expected that new users have access to all the code.
    – Often, you have to give access to the entire code base when only access to a small portion is needed.
  10. Ownership confusion
    Solution: None.
    – Who owns what part of the mono repo? How do you know what part of a mono-repo belongs to your team?
    – Does everyone own everything?
    – Does each team own pieces?
    – This becomes very difficult to manage in a mono repo.
  11. Requires additional teams – Another team slows down build and deploy changes
    Solution: None, really.
    Team 1 – Build Team
    Tends toward requiring a completely separate Build team or teams.
    – A dev has to go through bureaucracy to make changes, which . . .
    – Prevents proper DevOps. Note: DevOps Reminder – Remember DevOps means that developers of the code (not some other team) do their own Ops. If you have a Build team, or a Deploy team, you are NOT practicing DevOps even if you call such a team a DevOps team. If I name my cat “Fish”, the cat is still a cat, not a fish. A build team, a deploy team; even if they are called DevOps, they aren’t. In proper DevOps, the only DevOps team is the DevOps enablement team. This team doesn’t do the DevOps for the developers; the team does work that enables coding developers to do their own DevOps more easily. If the same developers that write the code also write CI/CD pipelines (or use already written ones) for both build and deploy automation, and the developers of the code don’t need to submit a ticket to the DevOps team to change it, then you are practicing DevOps.

    Team 2 – Repo Management Team
    – No this is NOT the same as the build team.
    – Many large companies are paying developers to fix issues with their repo software to deal with 100 GB sized repos.
    – Companies who use mono repos often need a team to fix limitations with the software they use to manage their mono repo.

Notice the list of cons piling up against mono repos. I’m just baffled that anyone who creates a pro/con list wouldn’t see this.

Conclusion

The pros of mono repos are small. The cons of mono repos are huge. How anyone can talk them up with a straight face baffles me.

Warning signs that mono repos aren’t all they are cracked up to be:

  • The most touted examples of success are massive companies with massive budgets (Google, Microsoft, etc)
    • Some of those examples show newer technology moving away from mono repos
      •  Microsoft Windows is a Mono Repo
        • dotnet core has 218 repositories and clearly shows that Microsoft’s new stuff is going to polyrepo
  • A lot of the blogs for mono repos failed to back up their pros with facts and data
  • Some of the sites are biased (they sell mono repo management tools)

Micro Repo

Poly Repo with:
– Microlibraries, each in a single git repo with only one code project and its matching test project
– Each releases to a package management system
– Automated CI/CD pipelines
– Shared Repo Container (i.e. all repos in the same place, such as GitHub)
Warning signs it isn’t all it’s cracked up to be:
– Big O(n) in regard to repos – you need n repos
Note: Yep, that is the only warning sign, and if you can script, even that is easy to manage.

Pros

Again, we will only list pros that aren’t shared with a mono repo

  • Promotes Microservices and Microlibraries – Poly Repos promote microservices and microlibraries as a downhill process. Downhill means it is the natural easiest way to flow, and the natural direction leads to decoupling.
    • A microservice builds a small process or web service that can be deployed independently
    • A microlibrary builds a small shareable library to a package management system for consumption.
  • Easy to pass Joel Test #2 – Can you make a build in one step? Every microlibrary can make a build in one step. And if one of them stops doing it, it is often a 1 minute fix for that one microlibrary.
  • Small repeatable CI/CD yaml pipelines as code
    • Because the projects are micro, the CI/CD pipelines can be their smallest.
      Note: This isn’t shared with a mono repo, as their CI/CD pipelines have to build everything.
    • They are also more likely to be reusable.
      • You can use the same CI/CD automation files on all microlibraries
      • Almost every project can share the exact same yaml code, with a few variables
    • Easy to find repeatable processes with tiny blocks
    • Add a CI/CD pipeline to automatically update NuGet packages in your micro repos. This can also benefit your security, as you will always have the latest packages. When you use the correct solution, you start to see synergies like this.
  • Prevents coupling (non-code)
    • Prevents solution coupling
    • Prevents project coupling
    • Prevents file system coupling
      • Prevents file coupling – You can’t easily reference a file in another repo. (You can copy it, but then you have duplicates.)
      • Prevents build coupling
    • Prevents test coupling
    • Prevents release coupling – New releases of libraries go out as a new package to your favorite package management system without breaking anyone. (see npm, maven, nuget, etc.)
    • (Only doesn’t prevent code coupling)
  • Builds are extremely tiny and fast –  Building a microlibrary can take as little as a minute
    • You can create a new build for a microlibary any time quickly
    • In builds, you often spend more time downloading packages than building.
  • Breaking a Microlibrary doesn’t break the world
    • It creates a new version of a package, the rest of the world doesn’t rely on it
    • With proper use of SemVer, you can notify your subscribers of a breaking change, for those who do need to update your package
  • Completed microservices and microlibraries can stay completed
    • A microservice or microlibrary that is working might never need to update to a new version of a package
  • Promotes SOLID coding practices for tooling around the code
    – It follows the S in SOLID. Your repo has limited responsibilities and has only one reason to change.
    – O in SOLID. Once a project is stable, it may never change, and may never need to be built/released again.
  • Simplifies Onboarding – A new dev can be productive on day 1 (and possibly even in the first hour)
    – A new developer can check out a single repo, run its unit tests, and get a debugger to hit a breakpoint in about 5 minutes.
    – Promotes staggered onboarding, where a developer can join, be productive on day one for any given repo, and then expand their knowledge to other repos.
    – Any single micro repo will not overwhelm a new developer
  • Security – you can give a new developer access to only the repos they need access to.
  • Single Source of Truth – A microlibrary is a single source of truth. The code exists nowhere else. Because it is a microlibrary (micro implying that it is very small), there will usually be no more than one or two feature branches at a time where code is quickly changed and merged.
  • Promotes Proper DevOps – Devs can easily manage their own build, testing, and releasing to a package management system.
  • Transitioning to Open Source is Easy – If one micro repo needs to go to open source, you just make it open source and nothing else is affected. (Be aware of open source licensing, that is a separate topic.)
  • Ownership Clarity – Each repo has a few owners and it is easy to know who are the owners
  • New Releases only when changed – A new release only goes out when the micro repo itself changes.

Pros that are shared

  • Single Place for all your code – Storing all your repos in one repository system, such as GitHub can give you many of the benefits of a Mono repo without the cons.
  • Code reuse/Simplified Dependency Management – Each micro repo hosts a microlibrary that publishes itself to a package management system for easy code sharing
  • Better Visibility and Collaboration Across Teams – It is so easy to see when and by whom a change was made to a microlibrary.
  • Easy to run a project locally

Cons

  • Atomic Changes/Large-Scale Code Refactoring – It is always hard to make sweeping changes anyway. The real con is the inability to change your repos in bulk with IDE tools; what if you need to change all your repos at once?
    Solution:
    You can script these changes. Two things to keep in mind:
    1. This might not be a con. You will likely never have to do this. I almost put this in ‘Cons that aren’t actually cons’.
    2. If you do need to do this, you can script it pretty easily. But you do have to script it, so that is why I left it here in cons.
  • Doesn’t prevent code coupling – Just because you consume a dependency using package management, doesn’t automatically make your code decoupled.
    Solution: None. A mono repo has no solution either, but at least all other coupling (folder, file, solution, project, etc.) is prevented.
    – You still need to practice SOLID coding practices.
    – However, because the repo is separate, it becomes much more obvious when you introduce coupling.
  • Big O(n) Repos – You need a repo for every microlibrary.
    – Can be overwhelming for a new developer to look at the number of repos.

Domain-based Repos

Domain-based Repos are another option. These are neither micro repos nor mono repos. If you have 100 libraries and 5 of them are extremely closely related and often, when coding, you edit those five together, you can put those in a single repo. For those 5 libraries, it behaves as a mono repo.

It is very easy to migrate from Micro Repos to Domain-based Repos. You will quickly learn which microlibraries change together. Over time, you may merge two or more microlibraries into a Domain-based Repo to get the benefits of atomic changes at a smaller level in a few domain-related micro libraries.

My recommendation is that you move to micro libraries and then over time, convert only the libraries most touched together into domain-based repos.

Example: My Rhyous.Odata libraries are a domain-based repo. However, I almost always only touch one library at a time now, so I’ve been considering breaking them up for two years now. It made sense during initial development for them to be a domain-based repo, but now that they are in maintenance mode, it no longer makes sense. Needs change over time and that is the norm.

Git SubModules

The only feature where micro libraries don’t compete with Mono Repos is atomic changes. With technology like git submodules, you may be able to have atomic changes, which is really only needed for a monolith. Everything is a microlibrary in a micro repo, but then you could have a meta repo that takes a large group of micro libraries and bundles them together using git submodule technology. That repo can store a script that puts the compiled libraries together and creates an output of your monolith, ready to run and test.

Conclusion

Micro Repo is the clear winner and it isn’t even close. Choose Micro Repos every time.

Once you move to micro libraries, allowing a small handful of Domain-based repos is totally acceptable.

Every Project is Different

There may be projects where a mono repo is a better solution; I just haven’t seen one yet. Analyze the needs of your project, as that is more important than this article or any other article out there.


Three divisions of code: Composition, Modeling, Logic

Every developer wants to write good, clean code. But what does that mean? There are many principles and practices to software engineering that help lead to good code. Here is a quick list:

  1. 10/100 Principle
  2. S.O.L.I.D. Principles
  3. D.R.Y. Principle
  4. Principle of least surprise
  5. KISS – both Keep It Super Simple and Keep It Super Small
  6. YAGNI
  7. TDD
  8. Interface-based design or principle of abstraction
  9. Plan for obsolescence
  10. Principle of Generality
  11. Principle of Quality
  12. Data models have no methods
  13. Encapsulation similar to Law of Demeter
  14. Big Design Upfront or BDUF (which causes too many projects to fail and should be replaced with an MVP sketch up front)
  15. MVP – Minimal Viable Product

You can go look up any of those principles. However, it is hard to understand all of them while in college or even in the first five years of coding. Even developers who have coded for twenty years struggle to understand which principle to apply to which part of code and when.

What parts of your code do you apply these principles to? No, “Your entire code-base” is not actually the correct answer.

That leads to the three general divisions of code:

  1. Composition
  2. Modeling
  3. Logic

There are some other divisions that don’t exist everywhere, such as a user interface or UI. Many web services, APIs, and libraries have no UI, so it isn’t a general division of all code. However, the UI itself falls into all three of the above divisions. There are many other such examples of divisions of code that aren’t as general.

Composition

Composition is how your code is laid out. Composition involves:

  1. Files, folders, & Projects, and location of language objects in them
  2. Libraries, packages, and references
  3. How a project starts up
  4. How language objects (i.e., classes in OOP) interact
  5. Determining how your logic is called
  6. Determining how your models are created
  7. Layering your application and how interaction occurs between layers

Pieces of composition have models and logic related only to their pieces.

Common Principles used in Composition

Many of the principles and practices focus on this area. Are you using the D in SOLID, Dependency Inversion or DI for short? If so, you are working in the area of composition. In fact, a “composition root” is a common term when discussing DI. Are you using interface-based design? The Law of Demeter? These are all in the area of composition.

  1. 10/100 Principle
  2. The S, I, and D of S.O.L.I.D. (especially Dependency Inversion)
  3. D.R.Y.
  4. KISS (both)
  5. TDD
  6. Interface-based design or principle of abstraction
  7. Principle of Quality
  8. Encapsulation similar to Law of Demeter
  9. Principle of Generality
  10. BDUF (or better, an MVP sketch)

Why are some left out? Well, YAGNI can be super destructive when it comes to composition. When you first start an app, it never needs a DI container, right? But you will regret following the YAGNI principle with that every time! When it comes to composition, you are going to need it.

What is the state of your composition?

Here are some basic questions to ask yourself to see if you have good composition:

  • Do you have a 1-step build?
  • Do you have a single file for a single class or object?
  • How does your program start up?
  • How does it create new instances of classes and other objects? With a DI container?
  • How does it configure settings? Is this clear, easy, and in one place? Can any class easily access those settings abstractly?
  • Can you create integration tests that run any one unit or multiple units of your code with all external systems (DBs, web services, file systems, etc.) faked or mocked?
  • Do you find models and logic easy to locate?

When composition is done wrong or not understood, no amount of good modeling or good logic can save your code base. The composition has to be rewritten. Have you ever heard these complaints?

  • The code is hard to work with.
  • The code is spaghetti code
  • I have no idea how this code works
  • Who wrote this code?
  • It takes forever to learn this code base
  • It is a nightmare just to get this code to build

These are road-signs telling you that the composition has to be rewritten. Often, when someone has what would be called “bad” code, composition is what they are talking about. Very rarely does a code-base need a complete rewrite. Often, it needs a composition rewrite. In fact, the entire movement to refactor to Microservices and Microlibraries is all about composition, or putting small bits of logic and modeling in one well-composed place, because most monoliths did it wrong.

Modeling

This is the most obvious part of code. Creating models. Often they are simple data models, but modeling can get very complex (which you should avoid when possible because KISS, right?)

Almost anything can be modeled. A Person:

public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public int Age { get; set; }
}

Models can be small or big, though smaller is better.

Usually models are simple and easy to use. However, if you break some of the principles, models become difficult and cumbersome. For example, if you break the Data models don’t have methods principle, your models are now both modeling and providing logic, which could indicate the model now has two responsibilities, so the Single Responsibility Principle is broken.

Interfaces are automatically excluded from code coverage. Since data models shouldn’t have methods, they should not have tests. You can mark them as such; for example, in C# you can give a model class the [ExcludeFromCodeCoverage] attribute, and there is likely an equivalent for whatever language or testing framework you are using.
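
For example, building on the Person model above, a minimal C# sketch might look like this (the attribute lives in System.Diagnostics.CodeAnalysis):

using System.Diagnostics.CodeAnalysis;

// A data model with no methods: there is no logic to test,
// so the class is excluded from code coverage.
[ExcludeFromCodeCoverage]
public class Person
{
   public string FirstName { get; set; }
   public string LastName { get; set; }
   public int Age { get; set; }
}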

Models are very common and there can be very many in a code base.

 

What do you model?

It is hard not to answer everything here, but really, you don’t model composition or logic. You model things. Nouns. Now, an action can be a noun, so while you don’t model logic, you might model a List<T> with an Add action, as it is common to add things to a list; however, the implementation of adding is logic, not part of the model.

Nouns represented in code are modeled. Data represented in code is modeled.

Behavior is modeled. Behavior is best described by an interface. All interfaces are models (not to be confused with data model classes, which is another type of modeling). Interfaces may be for composition or logic classes, and while the concrete composition or logic class itself isn’t a model, the creation of an interface is the act of modeling a single responsibility’s behavior.

Common Principles used in Modeling

  1. I in SOLID
  2. Principle of Quality
  3. Data models have no methods
  4. KISS (both)
  5. Principle of Generality (Think generics, such as modeling a list or collection with List<T>)
  6. Encapsulation similar to Law of Demeter (If your model is small enough and is shared between layers, it should be OK to be public right? Interfaces should be public, right?)
  7. TDD (except in data models, unless you break the Data models don’t have methods principle)
  8. YAGNI – helps keep models small
  9. Avoid model nesting hell (not a principle you hear much, but is very specific to models)

Notice this has way fewer principles than composition? That is because it is easier to do and more natural to get right.

What is the state of your modeling?

Here are some basic questions to ask yourself to see if you have good modeling:

  1. Are your models small and simple?
  2. Do you follow the Data models have no methods principle?
  3. Do you limit nesting of models? (Or do you have a nesting nightmare?)
  4. Do the code’s separate layers have their own models? That is, do you practice model translation between layers?
  5. Does the average model have no more than 10 properties or fields?

Logic

When we talk about if conditions, looping, algorithms, data manipulation, hitting servers, and reading or writing data, we are talking about the logic area. This is the meat of your code. This is where and how work gets done.

Logic classes are very common and there can be very many in a code base. When you are writing methods and doing things, you are writing logic.

What is the state of your logic?

Here are some basic questions to ask yourself to see if you have good logic:

  1. Is all your logic decoupled into small 10/100 principle following single responsibility classes?
    1. If not, then, Oh, wait, that is composition and your composition is bad. Keep the logic, fix the composition.
  2. Is your logic code 100% unit tested?
  3. Does your logic code have parameter value coverage?

If you have bugs in logic, it is almost always due to lack of testing. 100% code coverage isn’t always enough if you don’t have parameter value coverage.
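
As a hedged illustration of parameter value coverage, here is a minimal MSTest sketch (the test class name is illustrative) that exercises the same logic with several parameter values:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class StringCheckTests
{
   // Parameter value coverage: the same assertion runs against
   // null, empty, whitespace, and a real value.
   [DataTestMethod]
   [DataRow(null, true)]
   [DataRow("", true)]
   [DataRow("   ", true)]
   [DataRow("abc", false)]
   public void IsNullOrWhiteSpace_Returns_Expected(string input, bool expected)
   {
      Assert.AreEqual(expected, string.IsNullOrWhiteSpace(input));
   }
}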

Common Principles used in logic

  1. 10/100 Principle
  2. S. and D. of SOLID – all dependencies should be injected and not part of a single responsibility
  3. D.R.Y. Principle – Don’t repeat your logic
  4. Encapsulation – Just make all your concretes private and their interfaces public and have a factory that returns concretes of interfaces
  5. KISS – both Keep It Super Simple and Keep It Super Small
  6. YAGNI – Don’t write logic you don’t need (unless you are writing an API for other consumers; then anticipate needs, prioritize them, get early customer feedback on them, and add them)
  7. TDD – Extremely important here as logic must be tested
  8. Interface-based design or principle of abstraction (almost a repeat of the D in SOLID)
  9. Principle of Generality – Sometimes, when the logic can be generic it should be
  10. Principle of Quality – Yes, this applies to all three

Developer Training Wheels for Writing S.O.L.I.D. Code

Most new bike riders (usually young kids) use training wheels when they first start riding a bike. After a few months or years, those training wheels aren’t needed.

Similarly, when a developer writes code, he/she probably also needs training wheels. The difference is, that with code, you will almost always need these training wheels.

Note: After 20+ years of coding, despite a Master of Computer Science, despite being a Lead developer for over a decade, I still use these.

S.O.L.I.D. Training Wheels

1. S = Single Responsibility Principle

  • Summary: 1 piece of code has 1 responsibility. The inverse: 1 responsibility of code has 1 piece of code
  • Training Wheels:
    1. Follow the 10/100 Principle
      • Do not write methods over 10 lines
      • Do not write classes over 100 lines
      • If you have to change a class that already breaks the 10/100 Principle:
        • take your code out of that class and put it in a new class first so the original class is smaller
        • Check-in this refactor without your new code
        • make your changes in the new class
        • Check-in your new code
    2. Think smaller.
      1. The smaller the responsibility, the better. Keep breaking down the responsibility until it is so small you can’t split it
      2. The smaller the coding block, the more likely you will spot repetition
    3. Models already have a responsibility of being a model, and therefore should never have logic as that would be a second responsibility.
    4. Only one object per file (class, interface, struct, enum, delegate, etc.)

Note: The above are also the Don’t Repeat Yourself (D.R.Y.) training wheels

2. O = Open/Closed Principle

  • Summary: If your code is marked as public, and you already have consumers of the code (outside your solution), don’t break the code, because doing so will break all the consumers’ code
  • Training Wheels:
    • Default to private or internal. Don’t make anything public unless you are sure it needs to be public.
      • Give your unit test projects access to internals. For example, in C#, use [assembly: InternalsVisibleTo("YourProject.Tests")]
    • For existing code, don’t change the code signatures (the way the class, or method is defined) of anything public
    • Yes, you can add new methods and properties

3. L = Liskov’s Substitution Principle

  • Summary: Any class that implements an interface or a base class will work just as well as another class that implements the same interface or base class. Classes that inherit from such should also work just as well.
  • Training Wheels:
    • Implement interfaces / avoid base class inheritance
    • Choose “Has a” over “Is a” (see the sketch after this list)
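
As a hedged illustration of choosing “Has a” over “Is a”, here is a minimal sketch with hypothetical names: OrderProcessor has an ILogger instead of inheriting from a logger base class, so any ILogger implementation can be substituted without OrderProcessor knowing or caring.

public interface ILogger { void Log(string message); }

public class ConsoleLogger : ILogger
{
   public void Log(string message) => System.Console.WriteLine(message);
}

public class NullLogger : ILogger
{
   public void Log(string message) { /* intentionally does nothing */ }
}

// "Has a": OrderProcessor has an ILogger rather than being a logger.
// ConsoleLogger and NullLogger substitute for each other freely.
public class OrderProcessor
{
   private readonly ILogger _logger;
   public OrderProcessor(ILogger logger) { _logger = logger; }

   public void Process(string orderId) => _logger.Log("Processing " + orderId);
}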

4. I = Interface Segregation Principle

  • Summary: Keep your interfaces small and single responsibility.
  • Training Wheels:
    • See the S in Solid and the 10/100 Principle.
    • A class with a max of 100 lines should result in small interfaces. Might they need to be even smaller? Sure, but these are just training wheels.
  • It is better to have a class implement more than one interface than to have a large interface. You can also have interfaces inherit other interfaces.

5. D = Dependency Inversion Principle

  • Summary: A class doesn’t control its dependencies. A class depends only on interfaces and simple models.
  • Training Wheels:
    • Use a Dependency Injection Container. For example, in C#, there are DI containers such as Autofac, Ninject, or the one built into .NET. (A minimal sketch follows this list.)
    • All dependencies are injected as interfaces using constructor injection
      • i.e. Never use the ‘new’ keyword in a class that isn’t your one composition root or a DI module.
    • All interface/concrete pairs are registered with the DI Container
    • Almost never write a static. Statics are only for extremely simple utility methods, for example, in C#, very simple string extension methods.
    • If you encounter a static in existing code
      • If it isn’t public, refactor it immediately to be non-static. If it is public, you can’t delete it, see the O in solid, so wrap it
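
As mentioned in the list above, here is a minimal hedged sketch using the DI container built into .NET (Microsoft.Extensions.DependencyInjection); the IGreeter/Greeter pair is a hypothetical example, and the composition root is the only place that knows about concrete types.

using Microsoft.Extensions.DependencyInjection;

public interface IGreeter { string Greet(string name); }

public class Greeter : IGreeter
{
   public string Greet(string name) => "Hello, " + name;
}

public static class Program
{
   public static void Main()
   {
      // The composition root: register interface/concrete pairs in one place.
      var services = new ServiceCollection();
      services.AddSingleton<IGreeter, Greeter>();

      using var provider = services.BuildServiceProvider();

      // Resolve by interface; no 'new' outside the composition root.
      var greeter = provider.GetRequiredService<IGreeter>();
      System.Console.WriteLine(greeter.Greet("world"));
   }
}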

6. Other Training Wheels

  1. Write Unit tests
    1. Unit Tests are your first chance to prove your code is S.O.L.I.D.
      1. If you follow the above training wheels especially the S and D in solid, your code will be much easier to unit test.
    2. If you don’t write Unit Tests first (see TDD), at least write them simultaneously or immediately after each small piece of code.
    3. Use a mocking framework. For example, in C#, use Moq. (A minimal sketch follows this list.)
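
For example, a minimal Moq sketch might look like this (it reuses the hypothetical IGreeter interface from the earlier DI sketch; pass mock.Object anywhere an IGreeter is constructor-injected):

using Moq;

public class GreeterMockExample
{
   public static void Demo()
   {
      // Create a test double and stub its behavior.
      var mockGreeter = new Mock<IGreeter>();
      mockGreeter.Setup(m => m.Greet(It.IsAny<string>())).Returns("Hello, test");

      // Use the mock wherever an IGreeter is expected.
      IGreeter greeter = mockGreeter.Object;
      System.Console.WriteLine(greeter.Greet("anyone")); // prints "Hello, test"

      // Verify how the dependency was used.
      mockGreeter.Verify(m => m.Greet("anyone"), Times.Once);
   }
}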

Why use S.O.L.I.D. training wheels while coding?

Remember, these are training wheels. Just like training wheels on a bike help you not crash, these S.O.L.I.D. training wheels will help you not crash your code (i.e. write unmaintainable code).

If you follow these training wheels, you will be amazed how:

  1. Your code is far easier to maintain.
  2. Your code is far easier to keep up-to-date.
  3. Your code naturally uses common design patterns even if you, the writer, haven’t learned about that design pattern yet.
  4. Your code is testable.
  5. Other developers praise your code.

We are already in the age of Microlibraries

Early developers spoke of the ability to create small building blocks of code, and to reuse code over and over. This has happened. The proof is seen in many languages. In C# (dotnet), NuGet packages are used over and over again. In Javascript, npm loads thousands of libraries for any given web project.

However, there has been another move in this area that is beneficial that many people haven’t really taken the time to define. This is the move to microlibraries.

What is a Microlibrary?

A microlibrary is different from past libraries because they are:

  1. Smaller libraries that encompass less
  2. Larger libraries have been broken up and multiple microlibraries are now replacing a prior large library.

What you are seeing is S.O.L.I.D. principles applied to libraries. Many libraries broke the S in solid by having multiple responsibilities. However, the owners of many open source libraries have noticed this and have split their libraries into responsibilities.

Examples of Microlibraries

There are plenty of examples in Javascript and other languages, but the idea of microlibraries can be best described by looking at dotnet core. Microsoft has adopted the idea of microlibraries, though I doubt they use the term yet. No longer does dotnet include everything, as it did with .Net Framework. Instead, the many pieces of dotnet are now microlibraries that can be consumed from NuGet.

  • See https://github.com/dotnet and you will find well over two hundred separate repositories. Never before has dotnet been so decoupled. This is a glowing example of microlibraries.

For a personal example, I have created the Rhyous libraries that you can find on NuGet. I don’t have one giant Rhyous NuGet package. I have many microlibraries.

  • Rhyous.Collections
  • Rhyous.EasyCsv
  • Rhyous.EasyXml
  • Rhyous.StringLibrary
  • Rhyous.SimpleArgs
  • Rhyous.SimplePluginLoader
  • Rhyous.SimplePluginLoader.Autofac (notice that this is an add-on to Rhyous.SimplePluginLoader, but is still separate as Autofac integration is a separate concern.)
  • etc . . .

We’ve always had libraries, what changed?

The move to microlibraries has been occurring for well over a decade. It just appears that nobody has put a name to it. The greatest enabler of microlibraries has been:

  1. The tooling around package management.
  2. The tooling around continuous delivery. Automated check-in, code-reviews, build, test, deploy, publish.

Package Management

C# has NuGet, JavaScript has npm, Java has Maven. The ability to easily create, publish, find, and consume a package has made microlibraries possible.

Continuous Delivery

Microlibraries need to be published to package management systems. As the tooling has improved, the ability to automate this has simplified. Microsoft has Azure DevOps, GitHub (also now Microsoft) has its Actions, and AppVeyor also makes it easy. Not to mention, many of these tools are provided free to open source projects.

Continuous Delivery as Code has become the norm. Even the most simple of open source projects (most of mine) can have an automated process for check-in, code review, build, test, and publishing a package with very minimal work. And that work is checked in to a build file (AppVeyor and Azure DevOps both use yaml) which can easily be copied and pasted to other small projects with only the strings (names and paths) of the build file changing.

The benefits of Microlibraries

The smaller the libraries, the easier they are to work with. Also, the easier it is for them to be complete and rarely touched. Smaller means every part becomes easier. Many smaller pieces means we more easily see the commonalities. This ability to see commonalities is what led to package management systems, which led to further shrinking of libraries as managing their inclusion in new projects became easier.

Code

Less code is easier to maintain. It is easier to refactor and change. Since the project is far smaller, the idea of refactoring to solid, testable code is less daunting, and soon the code is refactored and more testable. An example of this is AutoMapper. It used to have everything as untestable static classes, but in a recent major release, it replaced its statics and now supports dependency injection, making the library more solid and testable.

Adding new features becomes much easier with small projects.

Build

Build is smaller and easier. The entire build and test process is very small, allowing for feedback within minutes on code changes (i.e. pull requests).

A one-step build (one of the top items of the Joel test) is so much easier with a microlibrary.

Build decoupling. Have you ever heard of those builds that take an hour? Some are worse, and take four hours or even a day. Microlibraries solved this. Any given build pulls a lot of already built code and builds only the minimal code needed. Your final application may not be a microlibrary, but it may encompass many microlibraries. Your build should take minutes because, instead of building everything every time, it uses already built microlibraries and only builds the code for your final application. If you still have four-hour builds, you might want to take a look at how you can split your build into microlibraries.

Tests

Tests are fewer and easier. The more tests, the fewer bugs, and the more likely a new change doesn’t cause a regression.

Again, your final application doesn’t need to run all the tests for each microlibrary. The microlibrary’s own build and test process does that. That means you can focus your tests on your application, which is faster.

Learning / Onboarding

It is far easier to learn a small amount of code. It is easier to understand even without good comments and excellent test coverage. If those are missing, adding comments and unit test code coverage is not as overwhelming.

New developers can usually pick up a microlibrary and get the code, get it building, and tests running within an hour. Onboarded developers can be productive in the first week.

Conclusion

Microlibraries are not just the future, they are already the present. The benefits are so great that any other method of releasing code seems antiquated.


Refactoring Existing Code To Be S.O.L.I.D.

There is a lot of code out there in all kinds of states. Calling code good or bad is relative. You can have good code that works poorly and bad code that works very well. The goal, of course, is good code that works well. Let’s call it S.O.L.I.D. code. (From now on, we will just write ‘solid’.) Along with solid, good code is tested code.

One of the key determiners of whether code is solid or not is the 10/100 rule. If a class is larger than 100 lines or a method is longer than 10 lines, there is a very high chance that it is breaking the S (Single Responsibility Principle) in solid. The second key determiner is whether you are doing the D (Dependency Inversion) in solid.

This article specifically focuses on classes and methods that are huge: classes of a thousand or more lines and methods that are hundreds of lines long. However, this process could work for classes that are only 200 lines but still need to be broken up. Usually such large classes and methods are not solid or tested.

You might ask, why is this a problem? Well, if your code is never going to change, it isn’t. Working code that doesn’t need to be changed and is tested by being used for the past decade is totally legit code and can continue to solve problems. However, if the code needs to change, this is a problem. Because the code is not solid, it is often hard to change. Because it is not tested, it is impossible to be certain that your changes don’t break something that has been working for a decade.

If you are doing this as a project, such as a major refactor, you probably still want to do this one piece at a time for continuous integration purposes and so if something breaks, you have small changes.

Step 1 – Implement new practices

You need new practices that will lead the code from where it is to where it should be: solid. The purpose of these new practices is to stop perpetuating the creation of code that isn’t solid and to begin refactoring the existing code to be solid.

  1. If a method or class breaks the 10/100 rule, your changes should decrease the line count.
    1. Exception: Every now and then, you break out a small piece of a class and you may end up with more lines, because it costs more lines to use that tiny class than you saved by breaking it out.
      Example of when this can happen:
      – You are only extracting two lines, and constructor injection (if your parameters are one line each) will add 1 line for the parameter and one line for the private member variable, and you still need 1 line to call the code, so two lines become three.
      – When you don’t have constructor injection and you want to use dependency injection, so you create a temporary lazy injectable property that resolves the newly created class. A lazy injectable property is usually 1 to 5 lines, so when breaking out small pieces, it can result in more lines. However, as this is a stop-gap until you have constructor injection, after which you will replace the lazy injectable property with constructor injection, it is acceptable. (A minimal sketch of a lazy injectable property appears later in Step 4.)
  2. New changes must occur in a separate class.
  3. New changes must be unit tested with high code coverage and parameter value coverage.
  4. New changes should use dependency injection if at all possible. If not, they must make DI easier to implement in the future.
  5. Don’t stress about perfection.
    1. For example, don’t worry about breaking it out wrong. If you have a class or method with thousands of lines, it could be doing 50 responsibilities instead of 1. If you break it up but make a mistake and break out a class and do 4 responsibilities in your new class, while that technically is wrong as it is breaking the single responsibility principle, it is still better. 1 class now does 46 responsibilities, and 1 class does 4 responsibilities. If you move toward better constantly, you will get there.

Step 2 – Finding out what to refactor

There are many reasons code needs to be refactored. This is going to focus on a few of them.

  1. Reorganization (the models and logic are mostly fine, but the composition is not)
  2. Breaking up huge classes or methods.
  3. Refactoring static to not be static
  4. Breaking up your code into Microlibraries

Step 3 – Implement an Inversion of Control (IoC) framework

If you have not implemented an IoC framework, it is important to do this before you start refactoring. There are patterns that can help you get there without an IoC framework, but they aren’t recommended, as while they are an improvement, they are still problematic.

I recommend Autofac. This framework should be implemented in the entry point of your application. If you are writing a library, which has no entry point, you don’t exactly have to worry about that. However, consider supporting an IoC framework with another library. For example, I have two NuGet packages: Rhyous.SimplePluginLoader and Rhyous.SimplePluginLoader.Autofac. They are split out, so they follow the microlibrary pattern (no more than necessary in one library), and it is easy for someone using the Autofac IoC framework to consume the module in Rhyous.SimplePluginLoader.Autofac to register objects in Rhyous.SimplePluginLoader. Also, if someone wants to use another IoC container, they could use the Rhyous.SimplePluginLoader.Autofac project as a prototype.

Implementing an IoC container is outside the scope of this article, but the Autofac documentation has getting-started guides, and the sketch below gives a rough idea.
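
Here is a minimal hedged sketch of an Autofac composition root in an application's entry point; the IGreeter/Greeter pair is a hypothetical example.

using Autofac;

public interface IGreeter { string Greet(string name); }

public class Greeter : IGreeter
{
   public string Greet(string name) => "Hello, " + name;
}

public static class Program
{
   public static void Main()
   {
      // The composition root: register interface/class pairs once, at the entry point.
      var builder = new ContainerBuilder();
      builder.RegisterType<Greeter>().As<IGreeter>();
      var container = builder.Build();

      using var scope = container.BeginLifetimeScope();

      // Resolve by interface, not with the 'new' keyword.
      var greeter = scope.Resolve<IGreeter>();
      System.Console.WriteLine(greeter.Greet("world"));
   }
}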

Note: While you can go on without this step, it will be a lot harder.

Step 4 – Breaking up Huge Classes or Methods

You might want to do this as one big project or you might want to do this for one single story. The below is the method to follow for both.

Find where the class is breaking the single responsibility rule

    1. Use #region and #endregion to put markers around a group of code that is 10 lines or less. Note: If doing this as a project, you may want to group all the lines in a method or class. Try to find logical places and names for what the group is doing. The below shows three made-up lines that are clearly part of a group task. However, this represents a real-life example that I have encountered. Notice the #region and #endregion tags. Add those.
      #region Group 1
      var a = something1FromThisClass.Do();
      var b = something2FromThisClass.Do();
      var x = performSomeActionOnA(a, b);
      #endregion
      

      Advanced: I have also encountered code where there may be two groups of code, and often the order the code is called in is mixed but can be unmixed.

      var a = something1FromThisClass.Do();
      var b = something2FromThisClass.Do();
      var c = something3FromThisClass.Do();
      var d = something4FromThisClass.Do();
      var x = performSomeAction(a, b);
      var y = performSomeAction(c, d);
      

      As you can see, the order is insignificant (and you must be sure the order is insignificant) so you can do this.

      #region Group 1
      var a = something1FromThisClass.Do();
      var b = something2FromThisClass.Do();
      var x = performSomeAction(a, b);
      #endregion
      
      #region Group 2
      var c = something3FromThisClass.Do();
      var d = something4FromThisClass.Do();
      var y = performSomeAction(c, d);
      #endregion
      
    2. Evaluate your code and find out whether you have Dependency Injection or not.
    3. For one group of lines, create a new class as described below, based on whether you have DI or not.
      Yes – You have Dependency Injection

        1. Create a new interface
        2. Create a concrete class that implements the new interface.
          Note: Don’t break the 10/100 rule in the new class!
        3. Create a well-named method for the group of lines and add it to the interface
        4. Implement the interface in the concrete class
        5. Register the interface/class pair
        6. Unit Test the Registration (You do have unit tests for your DI registrations, right?)
        7. Inject the interface into the original oversized class

      No – You don’t yet have Dependency Injection (stop and go create it if you can. If you can’t, here you go, but know you are limiting your ability to refactor and improve code.)

      1. Create a separate internal method that uses Method injection to handle those ten lines
        Note: If there are more than 5 parameters, maybe your group of lines is too big; otherwise it is fine.
      2. Create an Extension Method class.
      3. Move the new internal method into the new static class as a static method (I like to use Extension Method classes). Statics aren’t solid, but this is more solid than before. This is an intermediary step.
        Note: A static class and/or an extension method is not the end. It is a transitional phase. The end goal is to have Dependency Injection, preferably in the constructor. However, you will find it very easy to convert a static class and/or an extension method to an Interface/Class pair once your system is refactored to have DI.
      4. In the previous class, create a Lazy Injectable Property and use it. I already have a guide on this: Unit testing calls to complex extension methods
      5. Now add a task to your backlog to add the beginnings of Dependency Injection to your project.
    4. Write unit tests for the new method (this should be similar whether you used DI or an extension method class).
      Note: Method parameters should be simple models or mockable interfaces. This may require additional refactoring.
    5. Get your code checked-in.
      Note: It is important to check in each small change so you can test each small change, deploy each small change, so if something does break, it is easy to troubleshoot. If you check-in all n grouped code changes at once, it will be n-times harder to know what change broke something.

      1. Pull Request, for build and unit tests, deployment to test environment and automated tests
      2. Code Reviews
    6. Repeat for each grouped set of lines of code; or end the refactor after one group if you are doing one story only.
    7. Keep repeating Step 4 as often as needed.

Notice that the result is that your new code is solid. The older code is likely not yet solid itself; however, it has one less responsibility and fewer lines. The problem is decreasing, not increasing.

Follow this practice with each modification, and you will gradually see your code become more solid and your code coverage will increase.
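
For reference, here is a minimal hedged sketch of the lazy injectable property mentioned above, used as a stop-gap when constructor injection is not yet available; all names are hypothetical. Once a DI container exists, replace the property with constructor injection.

public interface IGroupOneHandler { void Handle(); }

public class GroupOneHandler : IGroupOneHandler
{
   public void Handle() { /* the extracted group of lines goes here */ }
}

public class OriginalBigClass
{
   // Lazy injectable property: production code lazily creates the default concrete,
   // while a unit test can assign a mock before calling DoWork().
   internal IGroupOneHandler GroupOneHandler
   {
      get { return _groupOneHandler ?? (_groupOneHandler = new GroupOneHandler()); }
      set { _groupOneHandler = value; }
   }
   private IGroupOneHandler _groupOneHandler;

   public void DoWork()
   {
      // The extracted responsibility is now called through an interface.
      GroupOneHandler.Handle();
   }
}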

Break large methods into a separate class

Let’s provide an example. Let’s say you have a method that is huge:

SomeClassWithActionX.cs

public object MyHugeActionXMethod(object param1, object param2)
{
    // ... too many lines of codes (say 50+ lines of code)
}

We will use SomeClassWithActionX.cs in our steps below.

Now, when dealing with smaller pieces, everything becomes easier to test. So your first step will be to break this into smaller pieces. Once you do this, the refactoring needs will become easier to identify.

  1. Create a new class and interface pair:
    MyHugeActionXMethodHandler.cs

    public class MyHugeActionXMethodHandler : IMyHugeActionXMethodHandler
    {
        public object Handle(object param1, object param2)
        {
            // ... too many lines of codes (say 50+ lines of code)
        }
    }
    

    IMyHugeActionXMethodHandler.cs

    public interface IMyHugeActionXMethodHandler
    {
        object Handle(object param1, object param2);
    }
    
  2. Register the class/interface pair with your IoC container.
    Note: If you still don’t have a IoC container, and adding constructor injection creates a chain reaction too big to solve right now, then you can get away with an intermediary step of creating a lazy injectable inside the prior class that lazy instantiates your object. You should be familiar with Lazy Injectables from the previous step. Remember, it is a less than perfect pattern that can make you more SOLID but is still not all the way SOLID itself.
  3. Add all the using statements from the previous class, SomeClassWithActionX.cs, then Remove and Sort Usings.
    This is just a fast way to make sure all the using statements are where they need to be.
  4. Resolve Unresolved Dependencies
    1. Identify all unresolved dependencies.
      How do you know what your dependencies are? Well, everything left that doesn’t compile (or is underlined with a red squiggly in Visual Studio) represents your dependencies.
    2. If the dependency doesn’t have an interface, create one for it now (unless it is a simple model).
    3. Register the interface/class pair with your IoC container.
    4. Unit Test the new class and method, looking for ways to break up the 50 line method as you test.
    5. Inject the dependency in through the constructor.
  5. Resolve Static Dependencies
    1. Identify all static dependencies.
      How do you know what are static dependencies?
      You can find them by locating calls using the object name instead of an instantiated object. Example: Logger.Log() instead of _logger.Log().
    2. Fix those static dependencies to not be static now. See Step 5.
    3. Inject the dependency in through the constructor.
  6. Now the Unit Test for SomeClassWithActionX.MyHugeActionXMethod() will mock IMyHugeActionXMethodHandler and assert that the mock was called. (A hedged sketch of that test follows these steps.)
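
Here is a hedged sketch of that test, assuming SomeClassWithActionX now takes IMyHugeActionXMethodHandler through its constructor and Moq is the mocking framework:

using Moq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SomeClassWithActionXTests
{
   [TestMethod]
   public void MyHugeActionXMethod_Delegates_To_Handler()
   {
      // Arrange
      var mockHandler = new Mock<IMyHugeActionXMethodHandler>();
      var sut = new SomeClassWithActionX(mockHandler.Object);
      var param1 = new object();
      var param2 = new object();

      // Act
      sut.MyHugeActionXMethod(param1, param2);

      // Assert: the oversized logic now lives in the handler and was called exactly once.
      mockHandler.Verify(m => m.Handle(param1, param2), Times.Once);
   }
}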

D.R.Y.

Don’t Repeat Yourself. Only write code once. I like to say that single responsibility goes two ways: a class should only have one responsibility, and one responsibility should be solved by only one class.

As you break your code into smaller blocks, you increase the likelihood that another piece of code already exists. Think of letters in the alphabet as an example. Imagine you have strings 100 characters long. What is the likelihood that you get duplicates? Extremely rare. Now if you decrease that to 10 letters per string, your chances of duplication increases. Decrease to three characters per string, and you will start seeing duplicates all the time. Similarly, as you break your code down to smaller blocks, the likelihood of seeing duplicates will increase and you can start deleting duplicate code in favor of already existing and well-tested code.

Not all code is immediatley app

Step 5 – Refactoring Static Classes

Static classes cannot be injected. There are entire groups of developers who will argue that allowing any object to be static was a mistake in the C# implementation and should never have happened. I am not that extreme, but at the same time, they are correct about many statics.

Statics that you should refactor

  1. Any static where mocking would make it easier to test
    1. Any static with business logic.
    2. Any static that touches another system
    3. Any static that two tests might call differently, such as any static with settings or state. You can’t test multiple states when using a static as by definition a static state can only have one state at a time and unit tests will share static states.

Statics that do not need to be refactored

  1. Some statics where mocking would make it harder to write unit tests or would never need to be injected.
    1. Example: Simple Extension Methods (or other functional statics) – (Notice the word simple.) Would you wrap and mock string manipulation methods included in dotnet, such as string.Contains() or string.Replace()? Would you wrap and mock ToList() or ToArray()? Of course not. If your static extension method is similar, then it probably shouldn’t be replaced. Test your extension method and use it wherever. The tradeoff is that you have tight coupling to that class, but you already have tight coupling to dotnet. So if your code is a library that extends a framework, don’t worry about it.

How to refactor a static class

A static class exists and you want to replace it. If it is private or internal you are free to replace it completely. Also if it is public with zero consumers outside your project and you can change all instances of its use in your entire project, then you can replace it completely, deleting the static. However, if it is a public class, with external consumers, replacing and deleting the static class would break the O in SOLID. Instead, the static class needs to remain for a version or two, though you can mark it obsolete for the next few versions. The following steps will allow you to either delete or keep it.

Method 1 – Wrap the static

This method is useful when you don’t have access to change the static class and refactor it yourself. For example, if this static class is in someone else’s library that you must consume. The following example will assume that you are calling a static class.

  1. Create a new class and interface pair:
    MyStaticClassWrapper.cs

    public class MyStaticClassWrapper : IMyStaticClassWrapper
    {
        public object SomeFunction(object param1) => SomeStaticClass.SomeFunction(param1);
        public object SomeProperty { get => SomeStaticClass.SomeProperty; }
    }
    

    IMyStaticClassWrapper.cs

    public interface IMyStaticClassWrapper
    {
        object SomeFunction(object param1);
        object SomeProperty { get; }
    }
    

    Note: While this only shows one method and one property, your wrapper may need multiple methods and properties. Remember to consider the I in SOLID when creating classes and interfaces. (A short sketch of consuming the wrapper follows these steps.)

  2. Register the class/interface pair with your IoC container.
    Note: If you still don’t have a IoC container, and adding constructor injection creates a chain reaction too big to solve right now, then you can get away with an intermediary step of creating a lazy injectable inside the prior class that lazy instantiates your object. You should be familiar with Lazy Injectables from the previous step. Remember, it is a less than perfect pattern that can make you more SOLID but is still not all the way SOLID itself.
  3. Identify all unresolved dependencies.
    How do you know what your dependencies are? Well, everything left that doesn’t compile (or is underlined with a red squiggly in Visual Studio) represents your dependencies.
  4. Identify all static dependencies.
    How do you know what static dependencies are? You can find them by locating calls using the object name instead of an instantiated object. Example: Logger.Log() instead of _logger.Log().
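
As referenced above, here is a minimal hedged sketch of consuming the wrapper; the ReportBuilder class is a hypothetical example.

public class ReportBuilder
{
   private readonly IMyStaticClassWrapper _wrapper;

   // The static dependency is now injected as an interface and can be mocked in tests.
   public ReportBuilder(IMyStaticClassWrapper wrapper) { _wrapper = wrapper; }

   public object Build(object input) => _wrapper.SomeFunction(input);
}
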
Method 2 – Refactor the static class to not be static

Recommendation: Start with a static class that references no other static classes.

  1. Create a copy of the static class.
    For example, if your class file is named MyStaticClass.cs, create a MyNotStaticClass2.cs.
  2. Find and replace the word “static ” (notice the space at the end) with nothing.
    This will remove all the current statics.
  3. Fix what is broken by removing the statics.
  4. Look for any references to another static class. If you find one, stop working on this class and go fix the dependent static class first. See the recommendation above.
  5. Create an Interface for the copied class: IMyNotStaticClass2.cs.
    1. You may have to create multiple interfaces.
    2. Apply “Step 4 – Breaking up Huge Classes or Methods” as often as needed.
  6. Keeping the static
    1. If you need to keep your static, you need access to a singleton instance of the class.
      1. Option 1 – Create a static singleton of an instance of IMyNotStaticClass2 in MyNotStaticClass2.cs.
      2. Option 2 – Create a static lazy injectable property that gets a singleton instance of a IMyNotStaticClass2.
    2. Change all the existing code in the static class to forward to the singleton instance, so there is no logic remaining in the static class, only forwarders. (A minimal sketch follows these steps.)
      1. public methods – forward to the singleton methods.
      2. private or internal method – remove as you shouldn’t need them anymore (they should have moved to the non-static class)
      3. public properties – forward to the singleton properties.
      4. private or internal properties – remove as you shouldn’t need them anymore (they should have moved to the non-static class)
      5. public fields – convert to properties (which could break reflection, but that would be rare and usually not something to worry about) and forward them to the singleton (or put these settings in their own interface/class just for those settings, forward to that, and then inject those settings into MyNotStaticClass2).
      6. private or internal fields – you should be able to remove these.
  7. Register the interface/class pairs with your IoC container.
    Note: If you are keeping your static class around, and you created a singleton in MyNotStaticClass2.cs, then register the singleton.
  8. Once the class is injectable, any other code that depended on the static, MyStaticClass, should instead be changed to inject an instance of IMyNotStaticClass2 into the constructor, and use that instead of the static.
  9. Add an obsolete attribute on the static class, to notify any consumers that the static class will soon be obsolete.
    Then after a version or two, you should be able to delete the static, as you’ve given fair warning to Api/Library consumers.
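
As referenced in step 6, here is a minimal hedged sketch of the kept static class after the refactor, using a singleton instance (Option 1) and forwarding only; the SomeFunction and SomeProperty members are hypothetical.

public static class MyStaticClass
{
   // Option 1: a static singleton instance of the new non-static class.
   internal static IMyNotStaticClass2 Instance { get; set; } = new MyNotStaticClass2();

   // No logic remains here; public members only forward to the singleton.
   [System.Obsolete("Use IMyNotStaticClass2 via dependency injection instead.")]
   public static object SomeFunction(object param1) => Instance.SomeFunction(param1);

   [System.Obsolete("Use IMyNotStaticClass2 via dependency injection instead.")]
   public static object SomeProperty => Instance.SomeProperty;
}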

You now have more SOLID and testable code. You should have fewer lines per class and fewer lines per method, more closely adhering to the 10/100 rule. You should also see yourself becoming more DRY and reusing more and more code.

Step 6 – Replacing untested code with well-tested code

We are no longer in the world of “It must be built in-house.” Instead, we are in a world where microlibraries exist and are easily accessible through any package management system. Most code now comes from elsewhere, and we are adding tools for managing libraries, security issues in them, etc. So if you have a lot of untested code that can be replaced with well-tested code, save yourself the hassle of refactoring and testing your code. Instead, replace it with the well-tested code.

How to find a replacement:

  1. Write down what the untested code is doing.
  2. Search package management systems (such as NuGet for C#) for solutions that already do this.
  3. Validate that the license is an enterprise/commercial friendly license (avoid GPL in commercial apps like a virus.)
  4. Vet any 3rd party packages by checking the solution for code coverage, build systems, etc.
  5. Replace your code with the vetted 3rd party package

Check out all the Rhyous libraries at https://GitHub.com/Rhyous. Rhyous is me, and my code is usually well unit-tested, used in production systems, and has commercial-friendly licenses.

Example 1: String to Primitive conversion

You have likely seen a lot of code to convert a string to an int, long, decimal, double, float, byte, or other primitive. Some C# code bases use Convert.ToInt32(someVar) without checking or handling whether that code throws an exception. Some handle the exception with int.TryParse(), but then they have to wrap it in an if statement with curly braces, resulting in 4 lines of code every time they use it. Why reinvent the wheel? Use Rhyous.StringLibrary, which has well-tested code. Just use someVar.To<int>(defaultValue);
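
As a rough before-and-after sketch, assuming the To<T>(defaultValue) extension method lives in the Rhyous.StringLibrary namespace:

using Rhyous.StringLibrary;

public class ConversionExample
{
   public static void Demo()
   {
      string someVar = "42";

      // Before: a TryParse wrapped in an if statement takes four lines each time.
      int quantity;
      if (!int.TryParse(someVar, out quantity))
      {
         quantity = 0;
      }

      // After: one line with the extension method and a default value.
      int quantity2 = someVar.To<int>(0);

      System.Console.WriteLine(quantity + " " + quantity2);
   }
}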

In C#, how to use Rhyous.StringLibrary

  1. Add the NuGet package to your project
    Or add it to the most commonly referenced project so, if you are using the 2017-and-later csproj format, everything that depends on it automatically gets it.
  2. Do a find of:
    1. “Convert.To” and you should find a lot of: Convert.ToInt32(someVar)
    2. int.TryParse
    3. long.TryParse
    4. Int32.TryParse
    5. repeat for every primitive
  3. Replace all found instances of Convert.To… and <primitive>.TryParse with Rhyous.StringLibrary’s extension method:
    1. someVar.To<int>(_defaultConstant)
      Note: Obviously, replace int with the appropriate primitive type and you should define the default constant.

Example 2: Parsing Json or Xml (Serialization)

You likely have code to read JSON or XML. If you wrote it a long time ago, you might be custom parsing it. Why? In C#, replace it with a serialization library, such as Newtonsoft.Json for JSON, or System.Xml.Serialization’s XmlSerializer for XML (and use Rhyous.EasyXml to make that even easier).

I once replaced dozens of class files and thousands of lines of code with simple data models and xml serialization.

Example 3: Plugin Loading

It is hard to write code to load plugins well. It must correctly work with the assembly loader to load a dll and its dependencies, which might require different versions of an already loaded dependency, all without creating a performance issue in the plugin loader, and do so stably. And wouldn’t you like to have your plugins support dependency injection? I bet you would. Which is why you shouldn’t write your own plugin-loading library, but use Rhyous.SimplePluginLoader, which was originally written in 2012, has been improved for ten years, and has high unit test code coverage.

Example 4: SystemWrapper

Find yourself constantly wrapping Microsoft code that isn’t unit testable? Use SystemWrapper, which already has wrappers and interfaces for most such code.

Example 5: Improving Microsoft’s Collection library?

Find yourself trying to wrap an interface around ConcurrentDictionary? Trying to work with collections more easily with fewer lines of code? Constantly using NameValueCollection (i.e. ConfigurationManager.AppSettings or similar) to get a setting or use a default value? Then give Rhyous.Collections a look. It has many extensions for collections.


Why training is important!

Here is a funny comic to explain the importance of training, even when you think you already know everything.


Configuring Visual Studio to open the browser InPrivate or Incognito

Sometimes, when coding a web application in Visual Studio, you may want to have the project start in an InPrivate or Incognito window. Browsers such as Chrome, Edge, Firefox, and others have a special way to open a clean session: no cookies, history, or logins, and it isn’t tied to your normal browser session. This is called private browsing. They each brand it a little differently, with Edge using InPrivate and Chrome using Incognito, but they are all private browsing.

Visual Studio can easily be configured to open the browser in private browsing.

Configure Visual Studio to Launch the Browser in Private Mode

  1. Open Visual Studio
  2. Locate your Asp.Net Application and open it
    or
    Create a new Asp.Net Project (you can throw away this project afterward)
  3. Once the project is open, locate the Debug Target icon, which is a green triangle that looks like a start icon:
  4. Click the drop-down arrow just to the right of it.
  5. Select Browse with:
  6. In the Browse With screen, click Add.
  7. Enter one or more of these values: (I entered both)

    Edge
    Program: C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
    Arguments: -InPrivate
    Friendly Name: Edge (InPrivate)

    Chrome
    Program: C:\Program Files\Google\Chrome\Application\chrome.exe
    Arguments: -incognito
    Friendly Name: Google Chrome (Incognito)

  8. Click OK.
  9. Now you can change the default if you desire.
    My default was set to Edge.

    To change the default, highlight the desired browser setting and click Set as Default button.
  10. Click Browse and your app will start in debugging and browse to the local url with your configured default browser.

Happy coding!


Microservices: Are they S.O.L.I.D., D.R.Y., and the Big O(N) problem

Whether the term microservice for you indicates a technology, an architecture, or a buzzword to land you that next dev job, you need to be familiar with the term. You need to know why there is buzz around it, and you need to be able to code and deploy a microservice.

Microservice Successes vs Failures

However, how successful are Microservices? A quick Google search does not show promising results. One O’Reilly study found that less than 9% consider their microservices implementation a complete success. Most implementations report partial success at best. Why is this? Could it be that microservices are like any tool: great when used correctly, less than adequate when not? Remember, you can successfully pound a nail with a wrench, but a hammer is better, and a nail gun is better than a hammer when coupled with power, a compressor, and enough space to use the tool. If you are building something that microservices aren’t suited for and you use microservices anyway because it is a buzzword, you are going to struggle, and even if you don’t fail, you won’t have complete success.

Should you implement Microservices?

Should you be looking to implement microservices? Do you have a monolith that could be broken up with microservices?

This really depends on your architecture and what you think you mean when you say microservice. The industry lacks a clear definition of what a microservice is.

Is there a better alternative to a microservice? That answer depends highly on what you are trying to do.

Microservice Architecture Analysis with S.O.L.I.D.

The initial idea of Microservices is based on the first of the S.O.L.I.D. principles. When looking at any one microservice, it fulfills the S in solid. But what about the other letters? What about other principles beyond SOLID, such as the Don’t Repeat Yourself (DRY) principle or Big O? Do microservices still hold up?

Let’s do an analysis of some of these concepts.

S = Single Responsibility

The S in S.O.L.I.D. literally means Single Responsibility, which is the very premise of a microservice. A microservice should have a single responsibility. A microservice excels at this. Or it is supposed to. Implementation is where things can get dicey. How good is your development team at limiting your microservice to a single responsibility? Did you create a microservice or a micromonolith?

Theoretical Score: 100% – complete success

Implementation Score: 50% to variable – half the developers I interview can’t even tell me what each letter in S.O.L.I.D. stands for, let alone hold their microservice to it.

O = Open Closed Principle

The O in S.O.L.I.D. means Open for extension and closed for modification.

This principle is a problem for microservice architectures. The whole idea of microservices goes against this principle. In fact, Microservices are actually a 100% inverse of the recommendation made by the O in S.O.L.I.D. because microservices are open for modification and closed for extension.

If a microservice needs to be changed, you change it. Those changes automatically deploy.

Theoretical Score: 0% – complete failure

Implementation Score: 0% – complete failure

L = Liskov substitution principle

The terribly non-intuitive name aside, this principle means that if you substitute a parent object with a child, the code shouldn’t know or care that the child was used. You can extend this to substituting an interface with any concrete implementation, and the code should just work regardless.

How do you do inheritance with a microservice? How do you substitute a microservice? You could create a child microservice that calls a microservice, but inheritance is just not a microservices concept.

Theoretical Score: N/A or 0% – complete failure

Implementation Score: N/A or 0% – complete failure

I = Interface Segregation Principle

The I stands for Interface Segregation, which means you should have the minimum possible defined in any one interface. If more is needed, you should have multiple interfaces. A single microservice excels here, as another principal idea of a microservice is that it has a defined interface for calling it, and that interface is as small (or micro) as possible. However, what if you need a slight change to an interface? Do you:

  1. Edit your microservice’s interface?
    You risk breaking existing users.
  2. Add a second interface?
    Doing this increases the size of your microservice. Is it still a microservice? Is it slowly becoming a mini-monolith now?
  3. Version your microservice interface in a new version, but keep the old version?
    This quickly can become a maintenance nightmare.
  4. Or create a completely separate microservice?
    Wow, creating a whole other microservice for one minor change seems like overkill.

Theoretical Score: 100% – complete success

Implementation Score: 50% to variable – there is no clearly defined path here; you have to trust your developers to make the right decision.

D = Dependency Inversion

D means Dependency Inversion, which means you should depend upon abstractions and not concretions. Well, how do you do this if you are a microservice? What about when one microservice depends on three other microservices? And those other microservices are REST APIs? How do you depend on them abstractly?
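
Inside a single codebase, dependency inversion is well understood: hide the downstream REST call behind an abstraction and inject it. The following is a minimal, hypothetical C# sketch (the IInventoryService interface and the URL are invented for illustration). Note that it only hides the coupling in code; it does nothing about the other service’s uptime, which is the real problem.

using System.Net.Http;
using System.Threading.Tasks;

// The abstraction the rest of the code depends on.
public interface IInventoryService
{
    Task<int> GetStockAsync(string sku);
}

// The concrete implementation that actually calls the other microservice.
public class HttpInventoryService : IInventoryService
{
    private readonly HttpClient _client;
    public HttpInventoryService(HttpClient client) { _client = client; }

    public async Task<int> GetStockAsync(string sku)
    {
        // Hypothetical endpoint; if the inventory service is down, this still fails.
        var response = await _client.GetStringAsync($"https://inventory.example.com/stock/{sku}");
        return int.Parse(response);
    }
}

public class OrderService
{
    private readonly IInventoryService _inventory;   // depends on the abstraction
    public OrderService(IInventoryService inventory) { _inventory = inventory; }

    public async Task<bool> CanFulfillAsync(string sku) => await _inventory.GetStockAsync(sku) > 0;
}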

This is a nightmare. One of the most difficult parts of coding is depending on external systems and their uptime.

Many developers and architects will simply say that this is easy: just use queuing, messaging, or a bus, and don’t make synchronous calls. But if the system is down, it is down, regardless of whether the call is synchronous or not. With synchronous calls, the caller can at least find out immediately that a system is down, whereas with event-driven bus systems, this can be difficult to know. If one microservice is down, preventing a UI from displaying for a user, do you think the user cares whether you are synchronous or asynchronous? No. They care about clear messaging, which is harder to do asynchronously.

The efforts to solve this microservice conundrum often lead to an architecture that is far more difficult to maintain than the monolith. Remember, just because something is a monolith doesn’t mean it was poorly architected.

Theoretical Score: 25% – extremely low success rate

Implementation Score: 25% to variable – there is no clear best practice here.

Other Tried and True Principles

Don’t Repeat Yourself (D.R.Y.)

Microservices don’t even try with this one. Even the top architects balk at the importance of this with microservices. Almost invariably, they recommend that you DO repeat yourself. With the packaging tools of this day and age (Maven, NuGet, npm, etc.), there is no excuse for this. Duplicating code is rarely a good idea.

There are exceptions to D.R.Y. For example, unit tests. I duplicate code in tests all the time because a self-contained test is better than a hundred tests using the same setup code. If I need to change shared setup, I risk breaking all the tests using that setup, whereas if I copy my setup, each test stands alone and can better isolate what it is trying to test.
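
As a hypothetical C# illustration of that trade-off (MSTest-style attributes, reusing the invented InvoiceCalculator from the Single Responsibility sketch above), each test repeats its own tiny setup so it stands alone:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class InvoiceCalculatorTests
{
    [TestMethod]
    public void CalculateTotal_TwoLineItems_ReturnsSum()
    {
        // Setup is repeated here on purpose so this test stands alone.
        var calculator = new InvoiceCalculator();
        var total = calculator.CalculateTotal(new[] { 10m, 5m });
        Assert.AreEqual(15m, total);
    }

    [TestMethod]
    public void CalculateTotal_NoLineItems_ReturnsZero()
    {
        // The same small setup is repeated; changing one test cannot break the other.
        var calculator = new InvoiceCalculator();
        var total = calculator.CalculateTotal(new decimal[0]);
        Assert.AreEqual(0m, total);
    }
}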

Do microservices fall into the same bucket as unit tests? No. Unit tests usually find bugs but don’t usually have bugs themselves the way production code does. Microservices aren’t like unit tests at all; they are production code. If you copy code to ten production microservices and then find a bug, fixing it in all ten places is going to be a problem.

Theoretical Score: 0% – complete failure

Implementation Score: 25% to variable – there is no clear best practice here. An implementor could balance how much code is copied vs contained in packaging systems.

Big O

Microservices can crash and burn when it comes to Big O. Remember, Big O describes how many times an action has to be done for a given set of things, N, where N is a variable representing the number of things. If there are two sets of things, you use two variables, N and M, and for three sets, N, M, and K (see the pattern: just keep adding a variable for each set of things). The cost per thing is often processor, memory, or disk space, but it is not limited to those. It can be anything: IP addresses, docker images, pipelines, coding time, test time.

Big O (1) is the ultimate goal. If you can’t reach it, the next best is Big O (Log N). If you can’t reach that, then you are at least Big O (N), which isn’t good. That means that your technology does NOT scale. Worse, you could be Big O (N * M) or Big O (N^2), in which case your technology slows down quadratically (or worse) and scaling is impossible without a change.
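
To make those growth rates concrete with a quick worked example: at N = 1,000 things, Big O (1) is 1 unit of work, Big O (Log N) is roughly 10, Big O (N) is 1,000, and Big O (N^2) is 1,000,000. The only difference is which curve you are on.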

What is the Big O for N microservices in regards to source control? Big O (N)

What is the Big O for N microservices in regards to CI/CD pipelines? Big O (N)

What is the Big O for N microservices in regards to docker containers? Big O (N)

What is the Big O for the number of terraform files (or whatever config you use for your deployment to your cloud environment of choice) for N microservices that you have to maintain? Big O (N)

What is the Big O for N microservices in regards to IP addresses? Big O (N) – however, you can get to Big O (1) if you configure an intermediary routing service, but now all you’ve done is create a Big O (N) configuration requirement.

What is the Big O for N microservices in regards to coding time? Big O (N) – remember, the recommendation from even the leading experts is to ignore the DRY principle and repeat your code.

What is the Big O for a mesh of microservices that have to communicate with each other? Big O (N^2). (For example, a full mesh of 10 microservices has 10 × 9 = 90 directed communication paths to secure, monitor, and version.)

A couple of places microservices shine in Big O are:

  1. Globally shared services. Example: How many NTP services does the world really need? Only one. Which is Big O (1).
  2. Microservice Hosts (Kubernetes, AWS, Azure, etc.) – these can provide logging, application insights, authentication, and authorization for N microservices with a single solution, Big O (1).

The Big O of microservices is terrible and nobody is talking about it. Why have microservices gotten away with being Big O (N) for all this time? There are a couple of reasons:

  1. Automation has outweighed those concerns.
  2. Early adoption means few microservices, so Big O is not always a concern when there are only a few of something.
    Get past early adoption and start having a lot of microservices, and you will find you are in just as much of a spaghetti hell as you were in with your spaghetti code monolith, only now it is harder to fix issues because they span multiple teams and multiple environments. Wouldn’t it be great if all those microservices were in one place? They were, before you strangled them away into microservices.

So when should you use Microservices?

Well, if you consider a microservice to be a cloud RESTful service, then for cloud-delivered solutions, microservices are probably going to have a higher success rate for you.

If you are installing on desktops/laptops/mobile devices, then microservices, as they are defined, are not the best solution. However, that doesn’t mean you should have a spaghetti code monolith. No, if you are installing an application (not just a link to a cloud website), then please keep your monolith; only instead of breaking it up into microservices on docker containers, follow S.O.L.I.D. principles and break it up internally.

Theoretical Score: 15% – unless we are talking about a globally shared service, in which rare case it is 100%.

Implementation Score: 10% to variable – an implementor could use shared CI/CD pipelines and terraform files with variables (but most aren’t that mature yet). Some might use only one public IP, but they still need N private IPs.

The future is bright. As many of these Big O issues are solved, which will come with maturity, microservices will naturally become more attractive.

What are microservices good for?

Single-responsibility shared services

A Network Time Protocol (NTP) service is a great example of one that should be a microservice. It has one responsibility and one responsibility only. We could have one instance of it for the whole world (notice how that suddenly made this microservice Big O (1)). However, distance is a problem, so the United States needs its own, Europe needs its own, and China needs its own. It doesn’t have to be separate code, just the same code deployed to multiple cloud regions.

Many services for cloud products can be single-responsibility shared services, which is why microservices target cloud products so well.

Elasticity

The ability of a microservice to automatically deploy additional instances of itself, often in different regions, to support scaling.

What are Microservices NOT good for?

Services that every customer needs their own instance of

Not all services are shared. Some services need to be custom per customer. Microservices are not good for these, especially if it is a whole pack of services.

On-Premise software

Microservices are best designed for cloud solutions or internal only integration services. If you sell software that a customer should install on-premise (on-premise means on one of their systems in their environments), microservices are not a good option.

Everything could be in the cloud but not everything should be in the cloud.

  • Desktop Applications and Suites such as Microsoft Office or Adobe Creative Suite. Sure, there are cloud versions of these, but desktop apps work best as stand-alone desktop apps. That doesn’t mean they can’t integrate with a microservice, but they shouldn’t require a microservice to function (many apps still need to work without internet).
  • Networking and other security software: VPN software, desktop management software, or large applications that for many reasons shouldn’t be in the cloud.

You don’t want customers to have to deploy 100 docker containers to install your software on-premise. You just don’t. That doesn’t mean you couldn’t have a single cohesive system that includes microservices all installed on the same server, but the point is, those microservices are by many definitions not microservices if they are on the same server. Instead, they become a cohesive but decoupled single system.

Dark Network Environments

A dark network, by definition, has no access to the internet. That doesn’t mean these environments couldn’t have their own internal clouds with microservices, but chances are, if they don’t have internet access, they won’t need to be accessed by a billion people and won’t need to be elastic.

UI experiences

Like it or not, microservices architecture can degrade the UI experience. Why? Because microservices are usually asynchronous and event-driven. Microservices, especially asynchronous event-driven ones, often make the UI harder to code because you have to call a service but you get no response; you then have to code the UI to go obtain the response from an event. This also increases debugging time. Some people say a synchronous microservice is not a microservice. If that is true, then all microservices make the UI harder to code and debug. If microservices make UI code harder, that is a significant con that every implementor should be aware of.

No matter who claims that microservices are 100% decoupled, they are wrong if a UI requires that microservice. If Service X is required by a UI, the UI is coupled to it being up. It doesn’t matter if it is a microservice that fails gracefully or a monolith that crashes without grace. If a customer is in the UI and can’t do something because a service is down, that service is a dependency, and the belief that changing a UI’s dependency to a microservice solves this is just false. If the UI doesn’t work, it doesn’t work. Just because the code itself isn’t coupled doesn’t mean the UI’s functionality isn’t tightly coupled to a dependent microservice’s existence and uptime.

Options beyond Microservices

Microservices are here to stay and are a great tool for the right uses. But they are not a Swiss Army knife. They are best for delivering cloud solutions or for taking processing off the client in desktop/mobile apps.

What are some alternatives to Microservices?

  1. A cohesive but decoupled single system may still be the right solution
    Note: What is the difference between a monolith and a ‘cohesive but decoupled single system’? Answer: the lack of tight coupling. A single system without tight coupling is not a monolith. If your system is tightly coupled, it is a monolith. If it is not tightly coupled, it is a ‘cohesive but decoupled single system’.

    1. A well-architected system that is highly decoupled is not a problem.
      1. Don’t fix it if it isn’t a problem.
      2. Don’t fix it just because microservices bigots name-call it a monolith instead of the ‘cohesive but decoupled single system’ that it is.
        Note: Some cloud enthusiasts use monolith as a bad word. It isn’t. Some can be prejudiced by their cloud enthusiasm, but you should know that older developers are just as prejudiced by their monoliths, well-architected or not.
    2. If an existing monolith is poorly architected, you may want to simply update the architecture to be a single cohesive but decoupled system instead of scrapping it entirely for microservices. The strangler pattern can work just as well to create a cohesive but decoupled single system as it does for creating microservices. You might even use single responsibility services (I didn’t say microservice because by some definition they aren’t microservices if they share a system) in your cohesive but decoupled single system.
  2. Multiple shared cohesive systems
    1. Perhaps you can split your system into 50 microservices, or you can have 3 cohesive systems housing 15-20 services (which could be microservices that share a system) each.
  3. Plugin-based system design – You don’t have to use microservices to get the benefits of decoupled, micro-sized code.
    The strangler pattern works just as well for moving your code to decoupled microplugins as it does for microservices.
    How is this different from a cohesive but decoupled single system? It uses plugins whereas a cohesive but decoupled single system doesn’t have to use plugins.
    Note: This is my favorite solution. 99% of the benefits of microservices, 100% SOLID, and far fewer drawbacks.

Your code should have a single responsibility, and vice-versa, a single responsibility should have a single set of code (if you have three pieces of code that each have a single responsibility, but it is the same single responsibility, you are not S.O.L.I.D.). Look at interfaces, look at dependency injection, and please look at plugins. Plugin-based technology gives you almost everything you get from microservices.
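
To make the plugin idea concrete, here is a minimal, hypothetical C# sketch of a reflection-based plugin loader (the IPlugin interface and the plugins directory are invented for illustration; real plugin frameworks add error handling, isolation, and much more):

using System;
using System.IO;
using System.Linq;
using System.Reflection;

// The contract every plugin implements (hypothetical).
public interface IPlugin
{
    string Name { get; }
    void Run();
}

public static class PluginLoader
{
    // Loads every concrete IPlugin implementation found in the dlls under a plugins directory.
    public static IPlugin[] LoadPlugins(string pluginDirectory)
    {
        return Directory.GetFiles(pluginDirectory, "*.dll")
            .Select(Assembly.LoadFrom)
            .SelectMany(assembly => assembly.GetTypes())
            .Where(type => typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract && !type.IsInterface)
            .Select(type => (IPlugin)Activator.CreateInstance(type))
            .ToArray();
    }
}

// Usage (hypothetical): each plugin is a small, single-responsibility piece of code that can be
// added or replaced without modifying the host (open/closed), yet everything still ships and
// runs as one cohesive but decoupled single system.
// foreach (var plugin in PluginLoader.LoadPlugins("plugins")) plugin.Run();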

Conclusion

Microservices can be a great tool or the wrong tool. Choose to use them wisely.

 

Note: This is obviously a highly summarized blog article, so please feel free to share your opinion and nit-pick as that is a form of crowdsourcing and is how blog articles get better.


A Cloud in a Box: My prediction of the Cloud, Data Center, Kubernetes, Quantum Computing, and the Raspberry Pi

Do you remember when the first computers were the size of a room? I predict that we will say something similar about the data center.

In the 2030s, we will say, “Do you remember when a data center was the size of a building?”

Technology developments

It won’t be long before we can buy a 1U (rack mount size) data center. How? We aren’t that far away. Let’s just combine a few technologies:

  1. Quantum computing. Did you read about Google’s breakthrough? https://phys.org/news/2019-10-google-quantum-supremacy-future.html
  2. Raspberry Pi and similar devices, only smaller. Have you seen the size of a Raspberry Pi Zero?
  3. Also, look at Microsoft’s Azure in a backpack.

The server terminal pattern

Also, have you noticed this pattern: as the client or on-premise device gets more powerful, more runs on the client?

Main Frame <————–> Dumb terminal

Web Server <————–> Desktop PC (Browser becomes Terminal)

Web Server <————–> Desktop PC (Browser runs code that used to run on the server)

The Cloud / Data Center <————–> Mobile device

The pattern is this: What is on the server, eventually moves to the terminal. And the terminal gets ever smaller.

The Internal/External Wave

Now, there is also a wave: hardware started in-house, moved out to hosting services, moved back in-house when internal data centers became easy, and then moved back out to the cloud when infrastructure became large and difficult to manage in-house.

Once the cloud is easy and small enough, that wave will move back in-house.

The future: The cloud in a box

Imagine that we have a micro server, a Raspberry Pi type of device, only it has a quantum processor and is the size of a microSD card. It has metal connectors and slides into a bus on a 1U server. The 1U server bus holds 100 x 200 of these small micro servers, for a total of 20,000 servers in 1U of space. Each Pi has 1 TB of storage.

Now these are small and easy to host internally. A company can easily host one of them, or put one in US East, US West, Europe, Asia, and anywhere else needed.

This is a cloud in a box.