Making sense of COVID-19 Statistics

Some people are led to believe that 5 in 100 people (5%) are dying. When you go to a site like this: https://www.worldometers.info/coronavirus/, that certainly could appear to be the case at first glance. But is it? Let’s look at the data to assure you that 5% of people are not dying. The data shows that it is not even close to 5%. Are 5% of those sick with COVID dying of it? No, the data says that isn’t accurate either.

We are getting closer to 8 billion people in the world, and if 5 out of 100 died, that would mean that 400 million people would be dead or are going to die. COVID hasn’t killed that many, nor does the data suggest that it will kill that many.

Let me help you with the data and numbers so you aren’t overly scared. You can be adequately scared, but you shouldn’t be overly scared because you either didn’t understand the data or the data was misrepresented to you. Is COVID bad? Sure. It has killed an estimated 460k people as of today, so yes, it is a bad sickness, especially for those who experience the worst symptoms. But "bad" is not something data determines; it is something the reader of the data determines.

So when creating reports, it is all about proper data. Grouping the data is important. So let’s group the data first because that action alone will give us a clearer picture.

FYI (I am going to avoid any complex algorithms, and stick with basic math so anyone can understand this. However, feel free to comment with more advanced statistics.)

Grouping Humanity in the Statistics

So let’s create groups of people so you can properly understand the statistics.

  • All humans. All 8 billion of us. This group isn’t very useful in our data, so let’s break it up into smaller groups.
    Note: The estimates are 7.8 billion, but I’m going to round up because recent studies in third-world countries have proved the population estimations in high-density areas have been low.

    • Group 1 – Has died from COVID.
    • Group 2 – Has been sick enough to go to the hospital, get tested, and tested positive for COVID but hasn’t died.
      We have a limited number of tests, so you only get tested if you are sick enough.
    • Group 3 – Has COVID, goes to the hospital, but symptoms are not severe enough to be tested, so the individual is sent home.
    • Group 4 – Has COVID but is asymptomatic or has such minor symptoms that the individual never went to the hospital.
    • Group 5 – Has been exposed to COVID but did not catch it.
    • Group 6 – Has never been exposed to COVID.
    • Group 7 – Has COVID and dies from it without ever going to the hospital or being diagnosed.
    • Group 8 – Died without COVID and was misreported (either accidentally or intentionally).

Now that we have some valid groups, we can start to understand the data correctly.

There are other ways to group humanity. We can group it by age groups. The CDC does this, which is nice, as we can get information by age group.

Percentage Chart

Percentage   Quantity
100%         8,000,000,000
10%          800,000,000
1%           80,000,000
0.1%         8,000,000
0.01%        800,000

What is the 5% (5 out of 100) statistic about?

Who are the 5 in the 5-in-100 (5%) statistic? That is Group 1 (those who died from COVID).

Who are the 100? That is the sum of Group 1 and Group 2 (those sick enough to get tested who tested positive). In other words, the count of Group 1 added to the count of Group 2.

Group 1 / (Group 1 + Group 2)
460,671 / (460,671 + 8,241,079) = 0.0529, or about 5%

(Note: The above is based on data pulled 6/19/2020 from https://www.worldometers.info/coronavirus/)

It isn’t 5% of all humans

When you hear 5% are dying, many people are led to believe that means 5% of all humans. Clearly, the data shows this is not the case. As mentioned earlier, 5% of all humans would be 400 million. Only about 8.7 million people have been reported as having caught the disease. So the data shows that only about 0.1% of the population has been reported in the statistics as having COVID.

Group 1 and Group 2 combined don’t even represent a full percent of the people in the world. Together they are only about 0.1% of the 8 billion people in the world; 99.9% of the world is excluded from Group 1 and Group 2.

5% of 0.1% have died. That means that, of all the people on earth, COVID is causing a death rate of roughly .005%, assuming accurate data.

It isn’t 5% of those exposed

Even a person who understands it isn’t 5% of all humans may still be led to believe it is 5% of all humans who have been exposed to COVID. Again, that is not the case. Most people who are exposed don’t catch the virus.

Let’s look at some factual examples of exposure and use those facts to extrapolate.

The Jazz have 15 players on their roster. Rudy Gobert and Donovan Mitchell both tested positive for COVID. Every single player on the team was exposed often.

(Here is an opinion, not a fact, that I will share: had Rudy Gobert not been a star athlete for an NBA team, both he and Donovan Mitchell would likely have been in Group 4. But due to the power and wealth of the NBA, a minor sniffle, as Rudy Gobert called it, was enough to get the whole team tested. This is good for our stats, as it gives us some numbers we can use to extrapolate.)

Can we be certain that all 15 players on the team were exposed to COVID? Yes. How often? A lot (sorry, this number is hard to get, so let’s use numbers we have). Rosters have 15 players and 1 head coach, and there are assistant coaches, staff, medical professionals, trainers, etc. Let’s only count the 15 players, 1 head coach, and 4 assistant coaches for a total of 20 individuals.

  • Number exposed: 20
  • Number who caught COVID: 2
  • Number who were asymptomatic: 1
  • Number with minor symptoms: 1
  • Number exposed but didn’t catch it: 18

So extrapolating from the Utah Jazz experience, only 2 in 20 exposed caught it. That means you have a 10% chance of catching it, right? Wrong! It is much lower than 10%. We used simple numbers, the 15 players and 5 coaches, for that calculation. But the number of people exposed is hard to quantify. Rudy Gobert and Donovan Mitchell surely exposed many, many more people in the Jazz organization and in their personal lives who didn’t catch it, likely ten times the number of Jazz team members counted here. That alone would take the chance of catching it when healthy and exposed down to around 1%, and probably lower.

If we estimate very conservatively (an extremely low estimate) that every infected person exposes at least 20 others, then the number of people exposed is 20 times greater than the number infected. That takes the death rate among those exposed from 5% down to about .25%. But that was a conservative number. It is likely that those with COVID exposed many more than 20 people. COVID is supposed to have a long incubation period and be contagious before symptoms appear, meaning the average person could expose hundreds of others. That puts the death rate far lower, closer to .02% to .04% of those exposed.

Now, also take into account the number of interactions. How many times did Rudy Gobert and Donovan Mitchell interact with the rest of their team? Well, in a single practice there are over 100 interactions: bumping, fouling, defending, communicating, sharing the same ball, etc. The contagious period is supposedly weeks. That means that 13 players and 5 coaches didn’t get the sickness despite likely hundreds (perhaps thousands) of interactions. The exact number of exposures is hard to quantify, but let’s conjecture that there were 1,000 exposures. That means 18 people were exposed roughly 1,000 times and didn’t get it. That puts your odds of being exposed and getting it somewhere below 1 in 1,000, or .1%.

Now, the Jazz are a healthy group, and not all statistical variations are in your favor. You can be sure that a person with an unhealthy lifestyle is more likely to get sick than a healthy Jazz player. Remember, though, that the Jazz have a lot of people in their organization who had many interactions with Rudy Gobert and Donovan Mitchell and didn’t get sick, suggesting (but not proving) that an extremely healthy lifestyle vs. an average lifestyle may not play a significant part in whether the disease is caught.

It isn’t 5% of those who have COVID

We have limited testing. So only those with the worst symptoms are getting tested.

  • What about Group 3 (Has COVID, goes to the hospital, but symptoms are not severe enough to be tested)?
  • What about Group 4 (Asymptomatic).

How many of these are there? Well, extrapolating from the Utah Jazz, two out of the twenty on the team would have been in these groups. Without the NBA’s special access to testing, nobody on the Jazz would have been in Group 1 or Group 2, which means the share of those exposed who end up in Group 1 or Group 2 combined is less than 1 in 20. It appears that for every 1 person sick enough to be tested, there are at least 2 who are asymptomatic or have minor symptoms. That means the death rate of those who catch COVID is at most 1/3 of the reported value.

It is only 5% of those sick enough to be tested

The 5% death rate is only among those who both caught COVID (which the data show is a rare .1% of the population) and also have symptoms bad enough to be tested. This only includes Group 1 and Group 2.

The data clearly shows that 5% of those with the worst symptoms are dying.

What are your odds of catching COVID?

Well, right now 0.1% of people in the world are listed as having it. Looking at the Jazz extrapolation, with hundreds of interactions, few people caught it. So any given person could have hundreds of interactions with a contagious COVID carrier and not get it.

The numbers aren’t there, but if 0.1% have it, and you can interact with them a hundred times without getting it, then that puts the odds around .001%. But these odds don’t take into account the type of interaction.

Is the data accurate?

No. Sorry. The data is what we have. It isn’t accurate. How inaccurate is the data? It is next to impossible to know.

Let me list some of the inaccuracies:

  1. Many people never go to the hospital. People could be dying without a diagnosis. How many? There is no way to know. But this is rare, and it wouldn’t significantly change the numbers.
  2. What about Group 7 (died of COVID but never went to the hospital)? This number would push the true death count higher.
  3. The US has 50 states reporting in different ways, and every hospital may be reporting differently. Then look at the rest of the world: every country reports differently, too.
  4. Various governments have tainted the data by offering money to facilities that report deaths as COVID deaths. This leads to Group 8. Money is a huge motivator, and it invalidates data by creating bias.
    FactCheck.org confirms that the payments are a fact. It claims there is no evidence of misreporting; however, other articles discuss deaths being attributed to COVID based on guesses without tests, though they also mention Group 7.
  5. COVID testing isn’t even close to 100% accurate. Some reports put accuracy as low as 60%. That makes this data highly inaccurate.
  6. In a care center for the elderly, if one person gets sick with COVID, all deaths in the care center are counted as COVID deaths without further testing.
  7. There are more people listed as having COVID than tests that have been administered, which shows that many are presumed to have it based on symptoms alone. UPDATE: The CDC now lists the counts as confirmed vs. not confirmed.
  8. They can’t keep the "has COVID" vs. "has the antibodies" numbers straight.

At best, the data is a rough estimate. At worst, it is only a little more useful than a wild guess.

Fact vs Opinion: Only facts and data matter in statistics. Opinions are not helpful. Guesses are not helpful.

Facts

  • Hospitals are getting paid to report COVID deaths
  • Testing is not a requirement for a death to be added to the statistics. The CDC guidelines state: "In cases where a definite diagnosis of COVID–19 cannot be made, but it is suspected, …, it is acceptable to report COVID–19 on a death certificate as 'probable' or 'presumed.'" https://www.cdc.gov/nchs/data/nvss/vsrg/vsrg03-508.pdf. OK, so not all reported COVID deaths were tested. This goes both ways, though: some who die from COVID aren’t marked as COVID deaths, and some who die without a COVID test are marked as COVID deaths.

Opinions: (these don’t matter in statistics)

  • Hospitals are (or aren’t) abusing the COVID death payments.

Guesses:

  • The number of deaths is likely higher
  • The number of deaths is likely lower

The numbers are skewed heavily by age

90% of all COVID deaths are over the age of 65, according to this data (https://data.cdc.gov/NCHS/Provisional-COVID-19-Death-Counts-by-Sex-Age-and-S/9bhg-hcku).

So that 5% (which is only of the more symptomatic and sickly) becomes 0.5% if you are 55-64.
It becomes 0.2% if your age is between 35 and 44.
Ages 25 to 34, your chance of dying is 0.00029%.
Ages 15 to 24, your chance of dying from COVID is 0.000052%. Yes, there are four zeros after the decimal point.
Ages 5 to 14, there are only 13 deaths in the US. The Flu has 48.
Under 5, there are also only 13 total deaths in the US.

The numbers are skewed by Location

In Utah (my state), there are 17,462 reported COVID cases and only 158 deaths. That is only .9%, not 5%.

Perspective

In this section, I will give you stats on other risks you probably haven’t been worried about, so you can compare COVID to them.

  1. Pneumonia deaths are still higher than COVID deaths in all age categories. Keep in mind that Pneumonia also benefited from the social distancing, so the comparison is fair.
  2. Lightning deaths last year in the US are listed at 20. Ages 5 to 14 have only 13 COVID-related deaths. Your children are about as likely to die from lightning as from COVID.
  3. The Flu killed 46 children ages 5-14 this year, to COVID’s 13. A child is roughly 3-1/2 times more likely to die from the flu. Again, the Flu benefited from social distancing the same as COVID did. However, at ages 75-84, COVID’s listed count is many times higher, while the Flu is listed at only 1,330. The older you are, the more you should be concerned about COVID compared to the Flu.
  4. Car accidents killed 36,560 people in the US in 2019. COVID has killed a similar number, 34,435, among people ages 85 and older in half a year. That is scary for the elderly. However, everyone under 54 years old is more likely to die in a car accident than from COVID.

One of the reasons this article was written was to help you calm down and realize it isn’t so bad. Well, if you weren’t freaking out about Pneumonia, Lightning, the Flu, or Car Accidents, then you probably don’t need to freak out about COVID. However, knowing there is a second illness as deadly as Pneumonia is disheartening.

The data shows that worrying about your children is not necessary. If you aren’t worried about your kid being hit by lightning, then you probably don’t need to freak out about your kid getting COVID.

Zero evidence of a child passing COVID

To date, there is no evidence of children spreading COVID-19. Despite the fact that tracking where and from whom a person contracted COVID is a high priority, there is still not one reported instance of a child spreading it.

Even though the few children with COVID have had significant exposure to others, there is not a single reported transmission from a child to anyone else.

Despite numerous studies, opinion appears to rule statistics here. There is an opinion that children will spread COVID-19. The data does not support that opinion.

0 verified transmissions from a child is a statistical fact.

Conclusion

The data is presented in a misleading way. It is presented as 100% trustworthy when it isn’t anywhere close to trustworthy. The scariest stats are being cherry-picked. It may or may not be intentional; such suppositions are not what this article is about. This article is about helping those who are smart but not trained in data analysis get a better view of what the data means.

The death rate of COVID for the earth’s population is around .005%.  .005% is a very different number than 5%.


A Cloud in a Box: My prediction of the Cloud, Data Centers, Kubernetes, Quantum Computing, and the Raspberry Pi

Do you remember when the first computers took up an entire room? I predict that we will say something similar about the data center.

In the 2030s, we will say, "Do you remember when a data center was the size of a building?"

Technology developments

It won’t be long before we can buy a 1U (rack mount size) data center. How? We aren’t that far away. Let’s just combine a few technologies:

  1. Quantum computing. Did you read about Google’s breakthrough? https://phys.org/news/2019-10-google-quantum-supremacy-future.html
  2. Raspberry Pi and similar devices, only smaller. Have you seen the size of a Raspberry Pi Zero?
  3. Also, look at Microsoft’s Azure in a backpack.

The server terminal pattern

Also, have you noticed this pattern: as the client or on-premises device gets more powerful, more runs on the client?

Main Frame   <---------->  Dumb terminal

Web Server   <---------->  Desktop PC (browser becomes the terminal)

Web Server   <---------->  Desktop PC (browser runs code that used to run on the server)

The Cloud /  <---------->  Mobile device
Data Center

The pattern is this: what is on the server eventually moves to the terminal, and the terminal gets ever smaller.

The Internal/External Wave

Now, there is also a wave: hardware started in-house, moved out to hosting services, moved back in-house when internal data centers became easy to run, and then moved back out again when cloud-scale infrastructure became too large and difficult to manage.

Once the cloud is easy to run and small, that wave will move back in-house.

The future: The cloud in a box

Imagine that we have a micro server, a Raspberry Pi type of device, only it has a quantum processor and is the size of a microSD card. It has metal connectors and slides into a bus in a 1U server. The 1U bus holds 100 x 200 of these micro servers for a total of 20,000 servers in 1U of space. Each one has 1 TB of storage.

Now these are small and easy to host internally. A company can easily host one of them, or put one each in US East, US West, Europe, Asia, and anywhere else needed.

This is a cloud in a box.


Git Cheatsheet

Clone

git clone <path or url to repo>

Create an empty repo

git init

Check if upstream has updates

git fetch

Switch to another branch

git checkout mybranch

Pull upstream updates

git pull

Add a file

git add filename

Move a file

git mv sourcefile destinationfile

Note: You can move a directory, and the source or destination can include a directory path.

Delete a local branch

git branch -d mybranch

Status

git status

Revert uncommitted changes to a file

git checkout path\to\file.ext

Remove all untracked files

This makes the repository clean again.
Do a dry run first with -n (include the same -x and -d flags you plan to clean with).

git clean -nxd

Then do it for real with -f.

git clean -fxd

git diff

git diff

git merge

git merge myBranch

Take all upstream source files

git checkout --theirs .
git add .

Keep all local files

git checkout --ours .
git add .

Note: During a rebase, the meanings of --ours and --theirs are reversed.

Abort the merge

git merge --abort

git rebase

git rebase master

Reusable Building Block Development

If you haven’t paid attention to the development world, you might have missed the current movement called "Reusable Building Block development." You know how, as kids, we could get blocks and build anything with them? We only had to stack them. Well, having an n-tier stack is very common now, so stacking isn’t the issue. The issue is having blocks that are easy to stack. Some are calling it the open source movement, and while most reusable building blocks are open source, not all of them are. A building block doesn’t have to be open source; it can simply be well documented and work well.

With NuGet and npm, building blocks are being created and published daily. The problem now is helping other developers recognize this movement, changing our mindset from "we can’t use it if it wasn’t invented here" to something more like "this is our unique stack of building blocks for a unique problem, and this stack was invented here."

I have created a bunch of building blocks for C#. Check out my GitHub account at https://github.com/rhyous. You will see a few reusable building blocks:

  • Rhyous.Collections – You know all those pesky extension methods you write for collections that are missing from the collections or from LINQ? I have a lot of them in here.
  • Rhyous.EasyCsv – A simple tool for working with CSV files.
  • Rhyous.EasyXml – A simple tool for working with XML. (You might ask why I don’t have one for JSON; that is because Newtonsoft.Json and fastJSON already exist, so another one isn’t needed.)
  • Rhyous.EntityAnywhere – Wow, have a full REST API and only have to create the model class. Are you kidding? This is probably the coolest project for web service APIs since the REST pattern was introduced.
  • Rhyous.SimplePluginLoader – Easily load plugins in your app.
  • Rhyous.SimpleArgs – Writing a tool with command line arguments? This tool allows you to configure your arguments in a model class and be done. It will output usage, enforce required parameters, allow for events when a parameter is set, etc.
  • Rhyous.StringLibrary – You know all those pesky extension methods you write for string manipulations that are missing from the .NET Framework? They are in this library, along with a pluralization tool. Ever heard of the oft-forgotten middle trim? It is in this library, too.
  • WPFSharp.Globalizer – The best localization library for WPF that exists, allowing you to change language and style (including right-to-left flow for certain languages) at runtime.

I actually have many more building blocks. Take a look.


DateTime Within Extension Method

I wrote an extension method for DateTime today. I wanted to call something simple to see if one date is within, say, two days of another date. There isn’t a built-in "within" method, so I set out to create one, and this is what I came up with.

Let me know what you think.

using System;
using System.Collections.Generic;

namespace Rhyous.WebFramework.Handlers.Extensions
{
    public enum DateTimeInterval
    {
        Milliseconds,
        Seconds,
        Minutes,
        Hours,
        Days,
        Weeks,
        Months,
        Years
    }

    internal class DateTimeIntervalActionDictionary : Dictionary<DateTimeInterval, Func<double, TimeSpan>>
    {
        #region Singleton

        private static readonly Lazy<DateTimeIntervalActionDictionary> Lazy = new Lazy<DateTimeIntervalActionDictionary>(() => new DateTimeIntervalActionDictionary());

        public static DateTimeIntervalActionDictionary Instance { get { return Lazy.Value; } }

        internal DateTimeIntervalActionDictionary()
        {
            Add(DateTimeInterval.Milliseconds, TimeSpan.FromMilliseconds);
            Add(DateTimeInterval.Seconds, TimeSpan.FromSeconds);
            Add(DateTimeInterval.Minutes, TimeSpan.FromMinutes);
            Add(DateTimeInterval.Hours, TimeSpan.FromHours);
            Add(DateTimeInterval.Days, TimeSpan.FromDays);
            Add(DateTimeInterval.Weeks, (double d) => { return TimeSpan.FromDays(d * 7); });
            Add(DateTimeInterval.Months, (double d) => { return TimeSpan.FromDays(d * 30); });
            Add(DateTimeInterval.Years, (double d) => { return TimeSpan.FromDays(d * 365); });
        }

        #endregion
    }

    public static class DateExtensions
    {
        public static bool IsWithin(this DateTime dateTime, double interval, DateTimeInterval intervalType, DateTime comparisonDateTime)
        {
            TimeSpan allowedDiff = DateTimeIntervalActionDictionary.Instance[intervalType].Invoke(interval);
            TimeSpan diff = (dateTime - comparisonDateTime).Duration(); // absolute difference, so argument order doesn't matter
            return diff <= allowedDiff;
        }
    }
}
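
Here is a minimal usage sketch (the dates are arbitrary, and the using points at the namespace above) showing the extension method in action:

using System;
using Rhyous.WebFramework.Handlers.Extensions;

var date1 = new DateTime(2018, 1, 1);
var date2 = new DateTime(2018, 1, 2, 12, 0, 0);

// The two dates are 1.5 days apart.
bool withinTwoDays = date1.IsWithin(2, DateTimeInterval.Days, date2); // true
bool withinOneDay = date1.IsWithin(1, DateTimeInterval.Days, date2);  // false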

The problems with the Tiobe Index in regards to .Net

The Tiobe index is really missing one piece of information about .Net for its users. Java is #1. So users should use Java, right? Well, maybe not. Let’s talk about the problems with it before we move on.

I am going to make an argument that:

  1. Java is actually a more clear #1 than suggested.
  2. .Net is #2 behind Java, but not as far behind as the Tiobe index makes it appear.

Problem 1 – The .Net Framework is not listed as a language

.Net has more than one language written against it. That makes .Net appear less popular, because its usage is fragmented across languages. In fact, two of those languages are in the top 5 or 6. However, the fact that a dll compiled in one .Net language can be consumed by any of the others is not reflected here. I am not saying .Net should be on the same list of programming languages, but Tiobe should make it clear that the combined .Net languages show .Net as being used more heavily. Similarly for Java, there are other languages that compile to the JVM. Perhaps there should be a page on compile targets: what percentage of languages compile to .Net’s Common Intermediate Language, compile to the Java Virtual Machine, compile to machine code, or don’t compile at all?

As for intermediate languages, only two stand out: Java and .Net. Java is #1, but it has only one language in the top 10. .Net has two in the top 10, and the combined .Net languages are easily a rival to the combined JVM languages.

Look at the Tiobe index and add up the .Net Framework languages:

.Net Framework Languages

Language            2019 Tiobe Rating
Visual Basic .Net   5.795%
C#                  3.515%
F#                  0.206%
Total               9.516%

Notice that, combined, the three main .Net languages total 9.516%. That puts .Net in the #3 position, behind only Java and C.

Problem 2 – Some .Net languages are missing and may be lumped in with other languages

What about Visual C++? Yes, you can write .Net code in C++. However, that language is completely missing from Tiobe. Or is it? Are all Visual C++ searches lumped in with C++? If so, shouldn’t Visual C++ be separated out from C++? What Tiobe rating would Visual C++ get? That is hard to guess, but it is a language that has been around for almost two decades. Let’s assume that a certain percentage of C++ developers are actually doing Visual C++, more than F# but quite a lot less than C#. Let’s just guess, because unlike Tiobe, I don’t have this data. Let’s say it is 0.750%. Again, this is a wild guess. Perhaps Tiobe could comment on this; perhaps they couldn’t find data on it themselves.

.Net Framework Languages

Language                2019 Tiobe Rating
Visual Basic .Net       5.795%
C#                      3.515%
F#                      0.206%
Visual C++ (estimate)   0.750%
Total                   10.266%

As you can see, .Net combined is clearly #3 just by combining the .Net languages, well past Python, which in fact can be used to code both for .Net (IronPython) and for the Java JVM (Jython). What percentage of Python usage is for those?

Here is a wikipedia list of .Net-based languages: https://en.wikipedia.org/wiki/List_of_CLI_languages.

Similarly, for Java, languages like Groovy raise its score. Here is a Wikipedia list of JVM-based languages: https://en.wikipedia.org/wiki/List_of_JVM_languages.

Problem 3 – Visual Studio is Awesome

For all the problems and complaints of bloat, Visual Studio is the most feature-rich IDE by such a long way that I doubt any other IDE will ever catch up to it, except maybe Visual Studio Code, which, however, is just as much a part of the Tiobe index problem as Visual Studio is.

The better the tool, the less web searching is needed. The breadth of the features in Visual Studio is staggering. The snippets, the IntelliSense, and the ability to browse, view, and even decompile existing code mean that .Net developers are not browsing the web as often as developers in other languages. My first search always happens in IntelliSense in Visual Studio, not on Google. The same features and tooling just aren’t there in the IDEs for other languages. Maybe Eclipse, but only with hundreds of plugins that most developers don’t know about.

Since Visual Studio 2012 released, the need to search the web has decreased with every single release of Visual Studio. I claim that C#, which is the primary .Net Framework language Microsoft targets in Visual Studio, is used far more than Visual Basic .Net. Tiobe has Visual Basic .Net at 5.795% and C# at 3.515%, but reality doesn’t match Tiobe’s statistics: C# is used far more than Visual Basic .Net.

I am making the hypothesis that, as the language primarily coded in Visual Studio, C# would appear to go down in the Tiobe index since the release of Visual Studio 2012. Let’s test my hypothesis by looking at the Tiobe year-by-year chart for C#. Do we see the Tiobe index going down starting with the release of VS 2012?

After looking at the Tiobe index, I am upgrading my claim from a hypothesis to a theory.

Other .Net languages may not experience the same effect as C#, as the .Net tooling is primarily focused around C#.

So the reality is that the Tiobe index is showing the data it can find from search engines, but the data for C# just isn’t there, because of the number of ways C# deflects the need to search.

I hypothesize that C# reached a peak Tiobe index of 8.763% and that its usage has not actually gone down. Instead, it has gone up. However, the data doesn’t exist to prove it. Assuming the hypothesis is correct and C# usage has gone up, its rating should be closer to 9 or 10. That means C# is probably #3 on its own.

If we adjust to take this problem into account, simply by using the 2012 peak rating and not assuming any growth since then, we see the following:

.Net Framework Languages

Language                Adjusted Tiobe Rating
Visual Basic .Net       5.795%
C# (2012 peak)          8.763%
F#                      0.206%
Visual C++ (estimate)   0.750%
Total                   15.514%

Now, I am not saying .Net is above Java with this adjusted combined rating. Java has other languages that compile to the JVM that would similarly raise its number, and it is still #1.

Problem 4 – Direct linking to or searching on Microsoft.com

Microsoft has done a great job with a lot of their documentation. Some of this could be attributed to Visual Studio as well. After clicking a link in Visual Studio, we are taken directly to a site like https://msdn.microsoft.com, where I do a lot of my language searches.

Also, Microsoft has built a community where customers can ask questions and get answers.

Tiobe has a nice document that clearly states which search engines did not qualify and why they didn’t qualify:

  • Microsoft.com: NO_COUNTERS

See: https://www.tiobe.com/tiobe-index/programming-languages-definition/

I would argue that a significant number of searches for .Net languages are done directly on Microsoft.com. I can only provide personal data: I often go directly to the source documentation on Microsoft.com and search on Microsoft’s site, and once I am there, almost all further searches for .Net information happen there.

Microsoft has more C# developers in its own company than many programming languages have worldwide. Are they doing web searches through the list of qualified search engines?

Problem 5 – Better documentation

I hypothesize that the better the documentation, the less web searching is required. I also hypothesize that Microsoft is one of the best at providing documentation for its languages.

Because the documentation for the .Net Framework is so good, the question is usually answered in a single search instead of the multiple searches that less well-documented languages may require.

Problem 6 – Education

Colleges teach certain languages. Python and C++ are the top languages taught in college. I would estimate that, because of this, the languages primarily taught in college have far higher search rates. Unfortunately, .Net languages, because of their formerly proprietary nature (which is no longer the case now that .Net Core is open source), were shunned by colleges.

It would be interesting to filter out searches by college students. Unfortunately, how would Tiobe know whether a search came from a college student or not?

Problem 7 – Limited Verbiage

Tiobe only looks at certain words. The words being queried are:

  • C#: C#, C-Sharp, C Sharp, CSharp, CSharp.NET, C#.NET

Further, Tiobe says:

The ratings are calculated by counting hits of the most popular search engines. The search query that is used is

+"&lt;language&gt; programming"

This problem piggybacks on Problems 3, 4, and 5. Visual Studio is so awesome that we know exactly what we are looking for. As a C# developer, I hardly ever type C# into my searches. I type something like WebApi, WCF, WPF, System.Net.Http, Entity Framework, LINQ, Xamarin, and many other searches. Microsoft documentation is so clear and specific (Problem 5) that we can do highly specific searches without including the word C#.

Yes, other languages have libraries, too, but do other languages have Microsoft’s marketing department branding libraries with trademarks and logos and making that brand the go-to phrase to search for? I don’t think there is a single other programming language that does this. Microsoft is lowering the web searches for C# with its own marketing.

This is further evidence to explain why the actual usage of C# has gone way up while the Tiobe index has gone way down. ASP.NET, ADO.NET, Razor, WCF, WebApi, WPF, WF, etc. What other language has logos and brands around specific parts of the language?

Problem 8 – Is C# always seen as C# in search engines?

I don’t always add C# to my Google searches. However, when I do, it is somehow changed to just C. The sharp symbol, #, is often removed. This recently stopped happening on Google, but it used to happen with every search in every browser. It was frustrating.

Has this been addressed in search engine stats?

Conclusion

The belief that C# is in the 3% range is an unfortunate error of circumstances. .Net should be looked at as the second most important tool set for a programmer, second only to Java, and above all other programming languages.


How to truncate all tables except one in MS SQL

It is well known that you can truncate all tables in a database. This is not something anyone is going to do in production, but while coding or testing, it can be a common practice.

To truncate all tables, use the following SQL:

EXEC sp_MSforeachtable 'TRUNCATE TABLE ?'

However, what if you want to exclude one table? For example, if you are using Entity Framework, you might want to keep the __MigrationHistory table untouched.

EXEC sp_MSForEachTable 'if ("?" NOT IN ("[dbo].[__MigrationHistory]"))
	TRUNCATE TABLE ?'

I finally figured it out by learning how to query the values:

EXEC sp_MSforeachtable 'if ("?" NOT IN ("[dbo].[__MigrationHistory]"))
         SELECT "?"'

It took me a good hour to figure this out. The key was to quote the ? variable.
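
If you need to run this from C# (for example, when resetting a test database before integration tests), here is a minimal sketch. The connection string is a placeholder and System.Data.SqlClient is assumed; the SQL is the same statement shown above.

using System.Data.SqlClient;

// Placeholder connection string; point it at a test database, never production.
var connectionString = "Server=localhost;Database=MyTestDb;Integrated Security=true;";
var sql = @"EXEC sp_MSforeachtable 'if (""?"" NOT IN (""[dbo].[__MigrationHistory]""))
    TRUNCATE TABLE ?'";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(sql, connection))
{
    connection.Open();
    command.ExecuteNonQuery();
}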


How to Replace WCF Serialization with Json.Net without Wrapping and with UriTemplate Support

First, yes, I am still using WCF. Let’s move past that concern to the real concern.

There are a dozen blog posts out there that explain how to replace the WCF serializer with Json.Net; however, every last one of them says that you must use wrapping and that using parameters in the UriTemplate is not supported. https://blogs.msdn.microsoft.com/carlosfigueira/2011/05/02/wcf-extensibility-message-formatters

Just search the internet for WCF IDispatchMessageFormatter Json.Net. You will find all the articles that only work without UriTemplate support.

Well, I needed it to work with UriTemplate support without wrapping.

It turns out that this solution is far easier than I expected. I came across it only after spending hours browsing Microsoft’s code.

To start, using parameters in the UriTemplate means that parts of your Url, or your Url parameters, are specified as parameters in the UriTemplate.

For example, the OData spec says that you should access an entity by Id with a Url similar to this one:

https://somesite.tld/some/service/Users(1)

Then the method for the WCF service is like this:

[OperationContract]
[WebInvoke(Method = "GET", UriTemplate = "Users({id})", ResponseFormat = WebMessageFormat.Json)]
OdataObject Get(string id);                 // on the service contract (interface)

public virtual OdataObject Get(string id)   // on the implementation
{
    // code here
}

That is fine for a GET call, as it doesn’t have a body. But what about a POST, PATCH, or PUT call that does have a body? And what about now that the world is realizing that a GET sometimes needs a body?

Also, the existing examples provide a lot of code to figure out whether a call is a GET and, if so, to skip the custom Json.Net IDispatchMessageFormatter entirely. None of that code is necessary with this solution.

Let’s look at a PUT call that updates a single property of an entity as this has two parameters in the UriTemplate as well as a message body.

[OperationContract]
[WebInvoke(Method = "PUT", UriTemplate = "Users({id})/{Property}", ResponseFormat = WebMessageFormat.Json)]
string UpdateProperty(string id, string property, string value);                 // on the service contract (interface)

public virtual string UpdateProperty(string id, string property, string value)   // on the implementation
{
    // code here to update the user's property
}

So there are two parameters in the UriTemplate, id and property, and the last parameter, value, is in the message body. Not a single solution for replacing the WCF serializer with Json.Net supports this scenario. Until now.

The goal is to deserialize the request body with Json.Net. But the existing solutions break UriTemplate parameters in trying to reach that goal. The goal is not to replace the default WCF UriTemplate parameter handling.

So now we can define a new problem: how do we deserialize the body with Json.Net but still have the UriTemplate parameters handled by WCF? The same formatter handles both the parameters and the message body, so we need to get the parameters without having WCF run its default deserializer on the message body.

Turns out, this problem is easy to solve.

Microsoft published their WCF code. Look at this code, lines 50-54: https://github.com/Microsoft/referencesource/blob/master/System.ServiceModel.Web/System/ServiceModel/Dispatcher/UriTemplateDispatchFormatter.cs

Notice in line 50 that WCF takes the number of parameters that come from the Url and the Url query string and subtracts that from the total list of parameters. If the message has no body, the subtraction result is always 0. If the message has a body, the subtraction always results in 1, telling WCF to deserialize the body. Well, I want WCF to do what it normally does with UriTemplate parameters, so if there is no body, I use the WCF default behavior (which all the blogs say to do, but they do it the hard way).

Solution:

  1. In the custom endpoint behavior, override the request formatter method, store the default IDispatchMessageFormatter, and pass it into the CustomDispatchMessageFormatter.
protected override IDispatchMessageFormatter GetRequestDispatchFormatter(OperationDescription operationDescription, ServiceEndpoint endpoint)
{
    var parentFormatter = base.GetRequestDispatchFormatter(operationDescription, endpoint);
    return new CustomDispatchMessageFormatter(this, operationDescription, parentFormatter);
}
  2. If there is no body, use the WCF default DeserializeRequest method. This vastly simplifies the code found on the blogs out there. The other examples had masses of code upstream that just isn't needed when message.IsEmpty can be used.
  3. If there is a body but no UriTemplate parameters, just use Json.Net.
  4. If there is a body and there are UriTemplate parameters, create a temporary parameter array one element smaller and pass that into the default formatter.
  5. Copy the temporary array back into the original array.
  6. Then deserialize the body with Json.Net.
public void DeserializeRequest(Message message, object[] parameters)
{
    // No body (or no parameters at all): let the default WCF formatter do its normal work.
    if (message.IsEmpty || parameters.Length == 0)
        ParentFormatter.DeserializeRequest(message, parameters);
    else
        DeserializeMessageWithBody(message, parameters);
}

private void DeserializeMessageWithBody(Message message, object[] parameters)
{
    // Let the default WCF formatter populate the UriTemplate parameters (all but the last one).
    if (parameters.Length > 1)
    {
        object[] tmpParams = new object[parameters.Length - 1];
        ParentFormatter.DeserializeRequest(message, tmpParams);
        tmpParams.CopyTo(parameters, 0);
    }
    if (message.GetWebContentFormat() != WebContentFormat.Raw)
        throw new InvalidOperationException("Incoming messages must have a body format of Raw.");
    // The last parameter is the message body. Deserialize it with Json.Net.
    byte[] rawBody = message.GetRawBody();
    var type = OperationDescription.Messages[0].Body.Parts.Last().Type;
    parameters[parameters.Length - 1] = RawBodyDeserializer.Deserialize(rawBody, type);
}

The deserializer becomes vastly simpler now that it isn’t also trying to handle wrapped parameters.

public class RawBodyDeserializer : IRawBodyDeserializer
{
    public object Deserialize(byte[] rawBody, Type type)
    { 
        using (MemoryStream ms = new MemoryStream(rawBody))
        using (StreamReader sr = new StreamReader(ms))
        {
            JsonSerializer serializer = new JsonSerializer();
            return serializer.Deserialize(sr, type);
        }
    }
}
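
As a quick sanity check, the deserializer can be exercised on its own. The Person type and JSON below are hypothetical; Json.Net is assumed, as above.

using System.Text;

public class Person { public string Name { get; set; } }

// Somewhere in a test or console app:
var rawBody = Encoding.UTF8.GetBytes("{\"Name\":\"Jared\"}");
var deserializer = new RawBodyDeserializer();
var person = (Person)deserializer.Deserialize(rawBody, typeof(Person));
// person.Name is now "Jared".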


Debugging Open Source dependencies included as NuGet packages

You may encounter the need to debug into a dependency that is a NuGet package. If the NuGet package is proprietary, you need to contact the vendor. However, if the NuGet package is open source, perhaps on GitHub, then you have all the tools you need to debug into it. Debugging into an open source NuGet package is what this article is about.

We are going to use Rhyous.StringLibrary for this example. It is a simple open source project that provides some common extensions to strings. These are extensions that are often found duplicated in many different projects and sometimes multiple times in the same project.

Step 1 – Check out the Source

Check out the repo from GitHub. You need a Git client. If you don’t have one, you can use GitHub Desktop or the one that is included in the Windows install of Git.

  1. Check out the repository:
    git clone https://github.com/rhyous/StringLibrary.git

Step 2 – Compare Assembly Versions

Some NuGet packages have different assembly versions than the code. I know, they shouldn’t, but it happens. Make sure that the assembly version of the dll referenced via the NuGet package is the same as the assembly version in the downloaded source.

  1. In your project that references the NuGet package, expand the references, highlight the dll that came from the NuGet package, and note the assembly version.

  2. In the downloaded NuGet package source project, check the assembly version. This is set differently in .NET Framework and .NET Standard projects, but it should be easy to figure out in both.

Step 3 – Build the Solution

  1. Open the StringLibrary.sln in Visual Studio.
  2. Click Build.
  3. Go to the output directory and copy the dll and pdb files.

Step 4 – Copy the dll and pdb to your solution

If you go to your project that references the dll, find and highlight the reference and go to properties, you can see the full path to the referenced dll.

  1. Go to the solution folder of the project you are working on.
  2. Go to your project that references the dll.
  3. Under References, locate the dll.
  4. Go to Properties of the dll reference by pressing F4.
  5. Note the path to the dll.
  6. Go into the Packages directory.
  7. Find the folder for Rhyous.StringLibrary.
  8. Locate the dll folder. 
  9. Rename the existing rhyous.stringlibrary.dll to rhyous.stringlibrary.dll.original.
  10. Copy the compiled dll and pdb from Step 3 to this folder.
  11. Clean and build your solution.

Step 5 – Add a breakpoint

You should now be able to step into the Rhyous.StringLibrary source from your project.

Note: If you have two instances of Visual Studio open, one for your project and one for the Rhyous.StringLibrary project, you may think you need to set the breakpoint in the Rhyous.StringLibrary instance. You don’t. You don’t even need the Rhyous.StringLibrary project open, unless you need to make a change, recompile, and recopy the dll and pdb to the packages directory. You simply need to step into the code from your own project in order to set a breakpoint.

Note: One trick is to go to Tools | Options | Debugging | General and turn off "Step over properties and operators (Managed only)".

  1. Debug your project.
  2. Put a breakpoint on the call to Rhyous.StringLibrary that you would like to step into.
  3. Step into the call to Rhyous.StringLibrary.
    Once you have stepped into the call, you should see its source.
    Continue stepping into or over the code, or whatever you would like.
    Once you are in the source, you can add breakpoints.
    Note: If you know how to add a breakpoint without first stepping into the project, let me know.

You should now be easily debugging your NuGet package.


Why long emails should be avoided as a Dev Lead

I keep failing to avoid a common mistake as a leader: sending long emails. It seems so easy. For whatever reason, as the dev lead, I cannot talk to a person face-to-face, so I write a long email.

I could spend time talking about why long email is bad, or I could show you how long emails make people feel by showing you an email dialogue.

Why long emails should be avoided:

Dev Lead: I’m being a good mentor. I write a nice long email that will help the team grow on subject A, including tons of info on subject A and its 5 benefits. I send this email to Dev 1 and CC the other two members of my team.
Feels good about his leadership.

Dev 1: What the dev thinks: Uh, oh. The dev lead is having a hissy fit again. Looks like he is pissed at something I did. What a jerk.
Feels angry.

Dev 2: Oh no. I have no idea what the dev lead is talking about. Do I know my stuff? Googles and tries to learn what the dev lead is talking about.
Feels shamed.

Dev 3: Ugh! Why is he trying to teach me crap I already know.
Feels patronized.

Manager: Hey, the team didn’t appreciate that email.

Dev Lead: Feels like a poor leader.

Manager: Feels like he is losing his team.

Why it would have gone better face-to-face:

Dev Lead: Hey devs. I want to discuss subject A. What do you know about it already?

Dev 1: I’ve used it before

Dev 2: Stays silent.

Dev 3: I know all about Subject A.

Dev Lead: OK, Dev 3, tell us about subject A.

Dev 3: Gives four excellent points about subject A. One of them the dev lead didn’t know.

Dev Lead: Adds two points about subject A that Dev 3 didn’t know. Changes his list from 5 to 6, adding the one item Dev 3 knew that he didn’t.
Feels impressed by Dev 3.

Dev 1: Feels growth.

Dev 2: Feels good to be introduced to a new subject.

Dev 3: Impressed that the dev lead let him educate the team.
Feels more respect for dev lead. Also notes that the Dev Lead knew things he didn’t and thinks he should listen more.

Manager: Feels good about the team.

It is all about the feelings, and there is something about face-to-face team interaction that leads to good feelings and something about long emails that always leads to bad feelings.

So, if you look at the face-to-face interaction, you can see that it all started with a short question. You could simulate this in a short email:

Dev Lead: Who can give me all the benefits of Subject A, using only the knowledge in your head? No browser search allowed until after you respond.

Dev 1: Responds with the single most common benefit of subject A.

Dev 2: Doesn’t respond.

Dev 3: Responds with four items, one that the dev lead didn’t know about.

Dev Lead: Interesting. Here are the items that the team responded with. I added two more benefits for a total of 6. Should we use subject A to get those 6 benefits in our project?

Now imagine the response was crickets.

Dev Lead: Who can give me all the benefits of Subject A?

Dev 1: Doesn’t respond.

Dev 2: Doesn’t respond.

Dev 3: Responds with one item.

Dev Lead: Subject A is interesting and important to our project. I am going to create a quick training on it.

Dev Lead: Writes a doc on it and sends it to the team.

Team: Feels good to learn something new.

Manager: Feels like the team is running itself.

Tips

  1. Keep emails short.
  2. Use many short emails.
  3. Ask questions, preferably one-liners:
    1. Start by asking your team what they already know first.
    2. Ask follow-up questions second
  4. Compile responses into a bulleted list
    1. Add to the list if you can
    2. Ask questions about the list
  5. Thank the team

I am going to put these tips into practice next time I feel like sending a long email.


Code Review – Quick Reference

This is a simple checklist to make code reviews more valuable. Simply check these rules.

Download a single page word document: Code Review Cheat Sheet

Does the code follow the 10/100 Rule?

This is a quick-check rule that isn’t extremely rigid. See the 10/100 rule of code.

Method has 10 lines or less

Is the method that was added or changed 10 lines or less? (There are always exceptions, such as algorithms.)

100

Is the class 100 lines or less?
Note: Model classes should have zero functions and be closer to 20 lines. Logic classes should be under 100 lines.

Is the code S.O.L.I.D.

S.O.L.I.D. is an acronym. See this link: https://en.wikipedia.org/wiki/SOLID

Single Responsibility Principle

Does each class have a single responsibility? Does each method have a single responsibility?
Is this the only class that has this responsibility? (No duplicate code; D.R.Y., Don’t Repeat Yourself.)

Open/Closed Principle

Can you extend the functionality without modifying this code? Config, plugins, event registration, etc.
Is there configuration in this code? If so, extract it. Configuration does not belong in code.

Liskov substitution principle

Is inheritance used? If so, does the child type cause issues the parent type wouldn’t cause?

Interface segregation principle

Does the code use interface-based design?
Are the interfaces small?
Are all parts of the interface implemented without throwing a NotImplementedException?

Dependency inversion principle

Does the code reference only interfaces and abstractions?
Note: If new code references concrete classes with complex methods, it is coded wrong.
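
As a minimal illustration (the types here are hypothetical, not from any particular project), the second form is what the reviewer wants to see:

// Flagged in review: depends directly on a concrete class with complex behavior.
public class ReportService
{
    private readonly SqlReportRepository _repository = new SqlReportRepository();
}

// Preferred: depends only on an abstraction, injected from the outside.
public class BetterReportService
{
    private readonly IReportRepository _repository;
    public BetterReportService(IReportRepository repository) { _repository = repository; }
}

public interface IReportRepository { }
public class SqlReportRepository : IReportRepository { }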

Is the code Unit Tested

99% coverage

Is the Code 99% covered? Is code not covered marked with the ExcludeFromCodeCoverageAttribute?

Parameter Value Tests for methods with parameters

Are all parameter values that could cause different behavior covered?
See these links:
Unit testing with Parameter Value Coverage (PVC)
Parameter Value Coverage by type

Naming things

Typos

Are your names typo free?

Naming convention

Do your file names, class names, method names, variable names match existing naming conventions?

Big O

Do you have any glaringly obvious Big O problems? O(n) or O(n²) where it could be constant or O(log n)?
See: https://en.wikipedia.org/wiki/Big_O_notation
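
For example, here is a hypothetical duplicate check. The nested-loop version is O(n²); the HashSet version is O(n) because each lookup is (amortized) constant time.

using System.Collections.Generic;

public static class DuplicateChecker
{
    // O(n^2): compares every pair of items.
    public static bool HasDuplicatesSlow(int[] values)
    {
        for (int i = 0; i < values.Length; i++)
            for (int j = i + 1; j < values.Length; j++)
                if (values[i] == values[j])
                    return true;
        return false;
    }

    // O(n): each Add is an amortized constant-time lookup and insert.
    public static bool HasDuplicatesFast(int[] values)
    {
        var seen = new HashSet<int>();
        foreach (var value in values)
            if (!seen.Add(value))
                return true;
        return false;
    }
}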


Parameter Value Coverage by Type

This article is a reference to Unit Testing with Parameter Value Coverage (PVC).

Primitive or Value Types

See this reference.

Short Name   .NET Class   Type                                                          Width (bits)   Range
byte         Byte         Unsigned integer                                              8              0 to 255
sbyte        SByte        Signed integer                                                8              -128 to 127
int          Int32        Signed integer                                                32             -2,147,483,648 to 2,147,483,647
uint         UInt32       Unsigned integer                                              32             0 to 4,294,967,295
short        Int16        Signed integer                                                16             -32,768 to 32,767
ushort       UInt16       Unsigned integer                                              16             0 to 65,535
long         Int64        Signed integer                                                64             -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
ulong        UInt64       Unsigned integer                                              64             0 to 18,446,744,073,709,551,615
float        Single       Single-precision floating point type                          32             -3.402823e38 to 3.402823e38
double       Double       Double-precision floating point type                          64             -1.79769313486232e308 to 1.79769313486232e308
char         Char         A single Unicode character                                    16             Unicode symbols used in text
bool         Boolean      Logical Boolean type                                          8              True or false
object       Object       Base type of all other types
string       String       A sequence of characters
decimal      Decimal      Precise fractional or integral type (29 significant digits)   128            ±1.0 × 10e−28 to ±7.9 × 10e28

byte

  1. Zero, 0, which is also byte.MinValue.
  2. A positive byte between 0 and 255.
  3. byte.MaxValue or 255

sbyte

  1. Zero, 0, which is also sbyte.MinValue.
  2. A positive sbyte between 0 and 127.
  3. A negative sbyte between -128 and 0.
  4. sbyte.MaxValue or 127
  5. sbyte.MinValue or -128

int

  1. A positive int between 0 and 2,147,483,647
  2. A negative int between -2,147,483,648 and 0
  3. Zero, 0
  4. int.MaxValue or 2,147,483,647
  5. int.MinValue or -2,147,483,648
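
As a sketch of what this looks like in a test (MSTest is assumed, and IsPositive is a hypothetical method under test), the int values above map naturally onto DataRow cases:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class IsPositiveTests
{
    // Hypothetical method under test.
    private static bool IsPositive(int value) => value > 0;

    [DataTestMethod]
    [DataRow(1, true)]             // a positive int
    [DataRow(-1, false)]           // a negative int
    [DataRow(0, false)]            // zero
    [DataRow(int.MaxValue, true)]  // int.MaxValue
    [DataRow(int.MinValue, false)] // int.MinValue
    public void IsPositive_ParameterValueCoverage(int value, bool expected)
    {
        Assert.AreEqual(expected, IsPositive(value));
    }
}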

uint

  1. Zero, 0, which is also uint.MinValue.
  2. A positive uint between 0 and 4,294,967,295.
  3. uint.MaxValue or 4,294,967,295

short

  1. A positive short between 0 and 32,767
  2. A negative short between -32,768 and 0
  3. Zero, 0
  4. short.MaxValue or 32,767
  5. short.MinValue or -32,768

ushort

  1. Zero, 0, which is also ushort.MinValue.
  2. A positive ushort between 0 and 65,535.
  3. ushort.MaxValue or 65,535

long

  1. A positive long between 0 and 9,223,372,036,854,775,807
  2. A negative long between -9,223,372,036,854,775,808 and 0
  3. Zero, 0
  4. long.MaxValue or 9,223,372,036,854,775,807
  5. long.MinValue or -9,223,372,036,854,775,808

ulong

  1. Zero, 0, which is also ulong.MinValue.
  2. A positive ulong between 0 and 18,446,744,073,709,551,615.
  3. ulong.MaxValue or 18,446,744,073,709,551,615

float

  1. A positive float between 0 and 3.402823E+38
    1. Note: This includes the float.Epsilon, but you could test double.Epsilon separately
  2. A negative float between -3.402823E+38 and 0
  3. Zero, 0.0
  4. float.MaxValue or 3.402823E+38
  5. float.MinValue or -3.402823E+38
  6. float.NaN
  7. float.PositiveInfinity
  8. float.NegativeInfinity

double

  1. A positive double between 0 and 1.79769313486232E+308
    1. Note: This includes the double.Epsilon, but you could test double.Epsilon separately
  2. A negative double between -1.79769313486232E+308 and 0
  3. Zero, 0.0
  4. double.MaxValue or 1.79769313486232E+308
  5. double.MinValue or -1.79769313486232E+308
  6. double.NaN
  7. double.PositiveInfinity
  8. double.NegativeInfinity

decimal

  1. A positive decimal between 0 and 79,228,162,514,264,337,593,543,950,335
  2. A negative decimal between -79,228,162,514,264,337,593,543,950,335 and 0
  3. Zero, 0
  4. decimal.MaxValue or 79,228,162,514,264,337,593,543,950,335
  5. decimal.MinValue or -79,228,162,514,264,337,593,543,950,335

string

  1. A null string
  2. An empty string, String.Empty, or “”
  3. One or more spaces ” “
  4. One or more tabs "\t"
  5. A new line or Environment.NewLine
  6. A valid string.
  7. An invalid or junk string
  8. A string with many special characters: `~!@#$%^&*()_-+=,.<>/\?[]{}|
  9. Unicode characters such as Chinese
  10. A long string, over 256 characters, or even 1 million characters.
  11. (Occasionally) Case sensitivity. For example, for string comparisons, case sensitivity of a string is a required Parameter Value Coverage test.

Struct

  1. It is impossible to know. You need to define this per struct you create. For example, if your struct is a point with int values X and Y, then it is simply the int list above twice, once for X and once for Y.

Enum

  1. Any one of the enum values.
  2. You may need to test each of the enum values, depending on how the enum is used.

Class or Reference Types

Class Object

Objects that are defined with the class keyword need the following tested:

  1. Null (This might go away or become optional in .NET 4.8)
  2. Instantiated
  3. Class properties can be primitive or value types, reference types, etc., and may need to be tested according to the type of the property.

Array, List, Dictionary, and other collections

Array, List, Collection

  1. Null
  2. Empty (instantiated with no items)
  3. Not empty but values of array are tested according to the value type. For example, an int[] would need to have the values tested in the ways listed above for int.
    1. Pay attention to how the code you are testing uses the items in an array or list. If the items are objects, do you need to check whether the list contains a null item?

Dictionary

  1. Null
  2. Empty (instantiated with no items)
  3. Key exists
  4. Key doesn’t exist
  5. Value at key is tested according to its value type. For example, a Dictionary<string, int> would need to have the values tested in the ways listed above for int.

Amazon Ec2 Instance Management with C#: Part 3 – Uploading and Importing a Key Pair

Before getting started

Skill Level: Beginner

Assumptions:

  1. You have completed Part 1 and 2 of Managing Amazon AWS with C# – EC2

Additional Information: I sometimes cover small sub-topics in a post. Along with AWS, you will also be exposed to:

  • .NET Core 2.0 – If you use .NET Framework, the steps will be slightly different, but as this is a beginner level tutorial, it should be simple.
  • Rhyous.SimpleArgs

Details

We may already have a key pair that we want to use, so we don’t want to create a new one. If that is the case, it can be uploaded.

Step 1 – Get key in the correct format

I used OpenSSL to do this.

  1. Download OpenSSL.
  2. Run this command:
    [sh]
    .\openssl.exe rsa -in c:\users\jbarneck\desktop\new.pem -pubout -out c:\users\jbarneck\desktop\new.pub
    [/sh]

Step 2 – Edit InstanceManager.cs file

We’ve created InstanceManager.cs in Part 1. Let’s edit it.

  1. Add a method to read the key file from disk and upload and import the key pair.
  2.         public static async Task ImportKeyPair(AmazonEC2Client ec2Client, string keyName, string keyFile)
            {
                // Read the public key from disk and strip the PEM header and footer lines.
                var publicKey = File.ReadAllText(keyFile).Trim().RemoveFirstLine().RemoveLastLine();
                // AWS expects the public key material to be base64 encoded.
                string publicKeyAsBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(publicKey));
                await ec2Client.ImportKeyPairAsync(new ImportKeyPairRequest(keyName, publicKeyAsBase64));
            }
    

Notice: We are calling RemoveFirstLine() and RemoveLastLine(). This is because key files have a header and footer line that must be removed before sending the key up to AWS. We’ll add those methods in the next step.

Step 3 – Add methods RemoveFirstLine and RemoveLastLine

  1. By the time this publishes, you should only need to install the Rhyous.StringLibrary NuGet package. Otherwise, add this class file:
    namespace Rhyous.AmazonEc2InstanceManager
    {
        public static class StringExtensions
        {
            // Removes everything up to and including the first new line character,
            // such as the -----BEGIN----- header line of a PEM file.
            public static string RemoveFirstLine(this string text, char newLineChar = '\n')
            {
                if (string.IsNullOrEmpty(text))
                    return text;
                var i = text.IndexOf(newLineChar);
                return i > 0 ? text.Substring(i + 1) : "";
            }

            // Removes everything from the last new line character to the end,
            // such as the -----END----- footer line of a PEM file.
            public static string RemoveLastLine(this string text, char newLineChar = '\n')
            {
                if (string.IsNullOrEmpty(text))
                    return text;
                var i = text.LastIndexOf(newLineChar);
                return (i > 0) ? text.Substring(0, i) : "";
            }
        }
    }
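
To illustrate what these extensions do to a key file, here is a quick, hypothetical example; the header, footer, and shortened key material are fake values used only to show the before and after (assumes using System; for the Console call):

    var pem = "-----BEGIN PUBLIC KEY-----\n"
            + "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A...\n"
            + "-----END PUBLIC KEY-----";

    // Strips the BEGIN line and the END line, leaving only the key material.
    var keyMaterial = pem.RemoveFirstLine().RemoveLastLine();
    Console.WriteLine(keyMaterial); // MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A...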
    

Step 4 – Configure command line Arguments.

We already have an Action argument to edit.

  1. Add ImportKeyPair as a valid action to the Action argument.
  2. Add an additional argument for the key file.
                . . .
                new Argument
                {
                    Name = "Action",
                    ShortName = "a",
                    Description = "The action to run.",
                    Example = "{name}=default",
                    DefaultValue = "Default",
                    AllowedValues = new ObservableCollection<string>
                    {
                        "CreateKeyPair",
                        "DeleteKeyPair",
                        "ImportKeyPair"
                    },
                    IsRequired = true,
                    Action = (value) =>
                    {
                        Console.WriteLine(value);
                    }
                },
                . . .
                new Argument
                {
                    Name = "KeyFile",
                    ShortName = "pem",
                    Description = "The full path to a public key already created on your file system in PEM format. The full Private key won't work.",
                    Example = "{name}=c:\\My\\Path\\mykeyfile.pem",
                    CustomValidation = (value) => File.Exists(value),
                    Action = (value) =>
                    {
                        Console.WriteLine(value);
                    }
                }

You can now upload a public key file for use on the Amazon Cloud.
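
If you want to try the new method outside of the command line plumbing, a minimal, hypothetical call might look like this. It assumes the class from Part 1 is named InstanceManager, that your AWS credentials and region are already configured, that the code runs inside an async method, and that new.pub is the public key produced by the OpenSSL command in Step 1 (requires using Amazon; and using Amazon.EC2;):

    var ec2Client = new AmazonEC2Client(RegionEndpoint.USWest2);
    await InstanceManager.ImportKeyPair(ec2Client, "MyKeyPair", @"c:\users\jbarneck\desktop\new.pub");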

Next: Part 4

Return to: Managing Amazon AWS with C#


Interviewing: A developer should have a portfolio

I recently started interviewing for some contract positions, one a Software Developer in Test position and one a Senior Software Developer position. I am deeply surprised by the candidates’ complete lack of an online presence. As I thought more about this, I realized that we have reached a point of maturity in Software Developer roles where portfolios are now expected. I expected every candidate to have an active account on some open source source control repository, e.g., GitHub, and have a portfolio of code there.

Portfolio

When it is time to interview for a position as a developer, you should have a portfolio. The days of coding on a whiteboard should be over. Instead, an interviewer should be able to easily see your code and what you have or haven’t done.

There shouldn’t be a question about whether you can write code. Instead, the question should be: Based on the code we can see this individual has written, can they be a good fit for our team?

Proprietary code

Your portfolio cannot include proprietary code. End of discussion. If you are a developer and you can’t find code that isn’t proprietary to put into your portfolio, then what are you doing?

Open Source/Non-proprietary code

Even when working with proprietary code, there are many pieces of code that are so ubiquitous that they probably should be part of the .NET Framework. You may use this code in every project you work on, such as common string extensions in C#, or a more complete string check in JavaScript that checks whether a string is undefined, null, empty, or whitespace.

Even better is if your code is not just stored, but it is available to be used, such as with NuGet, npm, Maven, or other code or library packaging tool. This shows that you not only have a portfolio, but you aren’t going to waste your hours rewriting code you have already written.

Where to keep your portfolio

I used to have mine on SourceForge but have since switched to GitHub. Visual Studio Online is another option. Where you store your portfolio does not matter as much as the fact that you do store it.

GitHub is what I chose, but you can easily Google for GitHub competitors if you want to host your portfolio elsewhere.

Brand your portfolio

My internet handle is Rhyous. Every piece of code I write that is part of my portfolio (code that is neither proprietary nor part of someone else’s open source project) is now branded with Rhyous. Some of my older code may not be, but my new code is. For example, all my namespaces in C# now start with Rhyous. That makes it very easy to differentiate projects I have forked from projects I have developed.

What your portfolio must show

It must show:

  • You have skills as a developer.
  • SOLID principles.
  • An understanding of the importance of Unit Tests.
  • You refuse to waste time writing the same code twice.**
  • You can work on projects with other developers.
  • You bring more than just your skill set; you bring ready-made building blocks.

** I find this to be so very important!

My Portfolio

My portfolio shows my skills as a developer. My code uses SOLID principles. Much of my code is unit tested.

I don’t like to write the same code twice. I, for one, will never have to write a CSV parser in C# again, as I have a good quality one: Rhyous.EasyCsv. Parsing arguments? I’ll never write an argument parser again because I have Rhyous.SimpleArgs. I will never have to write many of my string extensions again, as I can easily grab them for any C# project from my Rhyous.StringLibrary NuGet package. Tired of using TryGetValue to get values from your dictionary? Try Rhyous.Collections and use the NullSafe dictionary, which still uses TryGetValue but moves it inside the indexer so you don’t have to worry about it.

What a lack of portfolio shows

It has never clicked for you. What I mean by “It” is the idea of code reuse. The idea of object-oriented programming. The idea of standing on the shoulders of giants. The simple idea of playing with building blocks as a kid and making new things from them.

Go out and make your portfolio and fill it with building blocks, so that every time you build something new, you can build on foundational building blocks that are SOLID and only get better.


Amazon Ec2 Instance Management with C#: Part 2 – Deleting a Key Pair

Before getting started

Skill Level: Beginner

Assumptions:

  1. You have completed Part 1 of Managing Amazon AWS with C# – EC2

Additional Information: I sometimes cover small sub-topics in a post. Along with AWS, you will also be exposed to:

  • .NET Core 2.0 – If you use .NET Framework, the steps will be slightly different, but as this is a beginner level tutorial, it should be simple.
  • Rhyous.SimpleArgs

Step 1 – Edit InstanceManager.cs file

We’ve created InstanceManager.cs in Part 1. Let’s edit it.

  1. Add a method to delete the key pair.
  2.         public static async Task DeleteKeyPair(AmazonEC2Client ec2Client, string keyName)
            {
                await ec2Client.DeleteKeyPairAsync(new DeleteKeyPairRequest { KeyName = keyName });
            }
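
You can exercise this method directly while developing. A minimal, hypothetical call, assuming the class from Part 1 is named InstanceManager, that your AWS credentials and region are already configured, that the code runs inside an async method, and that MyKeyPair is the name of an existing key pair (requires using Amazon; and using Amazon.EC2;):

    var ec2Client = new AmazonEC2Client(RegionEndpoint.USWest2);
    await InstanceManager.DeleteKeyPair(ec2Client, "MyKeyPair");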
    

Step 2 – Configure command line Arguments.

We already have an Action argument to edit.

  1. Add DeleteKeyPair as a valid action to the Action argument.
  2.                 . . .
                    new Argument
                    {
                        Name = "Action",
                        ShortName = "a",
                        Description = "The action to run.",
                        Example = "{name}=default",
                        DefaultValue = "Default",
                        AllowedValues = new ObservableCollection<string>
                        {
                            "CreateKeyPair",
                            "DeleteKeyPair"
                        },
                        IsRequired = true,
                        Action = (value) =>
                        {
                            Console.WriteLine(value);
                        }
                    },
                    . . .
    

Next:

  • Part 3 – Uploading and Importing a Key Pair
  • Return to: Managing Amazon AWS with C#