Why training is important!
Here is a funny comic to explain the importance of training, even when you think you already know everything.
Sometimes, when coding a web application in Visual Studio, you may want the project to start in an InPrivate or Incognito window. Browsers such as Chrome, Edge, and Firefox have a special way to open a clean session: no cookies, no history, no logins, and nothing tied to your normal browser session. This is called private browsing. Each browser brands it a little differently, with Edge calling it InPrivate and Chrome calling it Incognito, but they are all private browsing.
Visual Studio can easily be configured to open the browser in private browsing.
Edge
Program: C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe
Arguments: -InPrivate
Friendly Name: Edge (InPrivate)
Chrome
Program: C:\Program Files\Google\Chrome\Application\chrome.exe
Arguments: -incognito
Friendly Name: Google Chrome (Incognito)
Happy coding!
Whether the term microservice indicates to you a technology, an architecture, or a buzzword to land that next dev job, you need to be familiar with it. You need to know why there is buzz around it, and you need to be able to code and deploy a microservice.
However, how successful are microservices? A quick Google search does not show promising results. One O’Reilly study found that less than 9% of respondents consider their microservices implementation a complete success. Most implementations report partial success at best. Why is this? Could it be that microservices are like any tool: great when used correctly, less than adequate when not? Remember, you can successfully pound a nail with a wrench, but a hammer is better, and a nail gun is better than a hammer when coupled with power, a compressor, and enough space to use it. If you are building something microservices aren’t suited for and you use them anyway because of the buzzword, you are going to struggle, and even if you don’t fail, you won’t have complete success.
Should you be looking to implement microservices? Do you have a monolith that could be broken up with microservices?
This really depends on your architecture and what you mean when you say microservice. The industry has no clear, agreed-upon definition of what a microservice is.
Is there a better alternative to a microservice? That answer depends highly on what you are trying to do.
The initial idea of microservices is based on the first of the S.O.L.I.D. principles. Looked at alone, any one microservice fulfills the S in S.O.L.I.D. But what about the other letters? What about principles beyond S.O.L.I.D., such as Don’t Repeat Yourself (DRY) or Big O? Do microservices still hold up?
Let’s do an analysis of some of these concepts.
The S in S.O.L.I.D. literally means Single Responsibility, which is the very premise of a microservice. A microservice should have a single responsibility, and a microservice excels at this. Or it is supposed to. Implementation is where things can get dicey. How good is your development team at limiting your microservice to a single responsibility? Did you create a microservice or a micromonolith?
Theoretical Score: 100% – complete success
Implementation Score: 50% to variable – half the developers I interview can’t even tell me what each letter in S.O.L.I.D. stands for, let alone hold their microservice to it.
The O in S.O.L.I.D. means Open for extension and closed for modification.
This principle is a problem for microservice architectures. The whole idea of microservices goes against it. In fact, microservices are the exact inverse of the recommendation made by the O in S.O.L.I.D.: they are open for modification and closed for extension.
If a microservice needs to be changed, you change it. Those changes automatically deploy.
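To make the contrast concrete, here is a minimal C# sketch (all names are hypothetical, not from any real library) of code that honors the open/closed principle: behavior is extended by adding a new rule class, never by editing the calculator. A microservice’s change path is the opposite: you edit it and redeploy.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical open/closed example: to add a new discount rule,
// you add a new class; you never modify DiscountCalculator.
public interface IDiscountRule
{
    decimal Apply(decimal price);
}

public class TenPercentOff : IDiscountRule
{
    public decimal Apply(decimal price) { return price * 0.90m; }
}

public class FiveDollarsOff : IDiscountRule
{
    public decimal Apply(decimal price) { return Math.Max(0m, price - 5m); }
}

public class DiscountCalculator
{
    // Closed for modification: this method never changes.
    // Open for extension: pass in new IDiscountRule implementations.
    public decimal Apply(decimal price, IEnumerable<IDiscountRule> rules)
    {
        return rules.Aggregate(price, (p, rule) => rule.Apply(p));
    }
}
```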
Theoretical Score: 0% – complete failure
Implementation Score: 0% – complete failure
Its terribly non-intuitive name aside, this principle means that if you substitute a parent object with a child, the code shouldn’t know or care that the child was used. The same idea extends to substituting an interface with any concrete implementation: the code should just work regardless.
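For readers who want the principle itself in code, here is a minimal C# sketch (names are hypothetical) of substituting a parent with a child. The caller holds the abstraction and never knows which implementation it got:

```csharp
using System;

// Hypothetical Liskov-substitution example.
public abstract class Logger
{
    public abstract string Log(string message);
}

public class ConsoleLogger : Logger
{
    public override string Log(string message) { return "console: " + message; }
}

public class NullLogger : Logger
{
    public override string Log(string message) { return string.Empty; }
}

public static class Worker
{
    // This code works with any Logger child, unchanged.
    public static string DoWork(Logger logger)
    {
        return logger.Log("work done");
    }
}
```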
How do you do inheritance with a microservice? How do you substitute a microservice? You could create a child microservice that calls a microservice, but inheritance is just not a microservices concept.
Theoretical Score: N/A or 0% – complete failure
Implementation Score: N/A or 0% – complete failure
The I stands for Interfaces Segregation, which means you should have the minimal possible defined in any one interface. If more is needed, you should have multiple interfaces. A single microservice excels here as another principle idea of a microservice is that it has a defined interface for calling it and that it is a small (or micro) of an interface as possible. However, what if you need a slight change to an interface? Do you:
Theoretical Score: 100% – complete success
Implementation Score: 50% to variable – there is no clearly defined path here; you have to trust your developers to make the right decision.
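To illustrate the interface-segregation idea in plain code, here is a minimal C# sketch (names hypothetical) of two small interfaces instead of one fat one, so callers depend only on what they use:

```csharp
using System.Collections.Generic;

// Hypothetical interface-segregation example.
public interface IReader { string Read(int id); }
public interface IWriter { void Write(int id, string value); }

// A read-only consumer takes only IReader and cannot even see Write.
public class ReadOnlyConsumer
{
    private readonly IReader _reader;
    public ReadOnlyConsumer(IReader reader) { _reader = reader; }
    public string Fetch(int id) { return _reader.Read(id); }
}

// One class may still implement both interfaces.
public class InMemoryStore : IReader, IWriter
{
    private readonly Dictionary<int, string> _data = new Dictionary<int, string>();
    public string Read(int id) { string v; return _data.TryGetValue(id, out v) ? v : null; }
    public void Write(int id, string value) { _data[id] = value; }
}
```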
D means Dependency Inversion, which means you should depend upon abstractions and not concretes. Well, how do you do this with a microservice? What about when one microservice depends on three other microservices? And those other microservices are REST APIs? How do you depend on them abstractly?
This is a nightmare. The most difficult part of coding is depending upon external systems and their uptime.
Many developers and architects will simply say this is easy: just use queuing, messaging, or a bus, and don’t make synchronous calls. But if a system is down, it is down, regardless of whether the call is synchronous. With synchronous calls, the caller can at least find out immediately that a system is down, whereas with event-driven bus systems this can be difficult to know. If one microservice is down, preventing a UI from displaying for a user, do you think the user cares whether you are synchronous or asynchronous? No. They care about clear messaging, which is harder to do asynchronously.
The efforts to solve this microservice conundrum often lead to an architecture that is far more difficult to maintain than the monolith. Remember, just because something is a monolith, doesn’t mean it was poorly architected.
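For contrast, here is what dependency inversion looks like inside a single codebase, where it works well. This is a minimal C# sketch with hypothetical names: the concrete service client (a REST wrapper, a queue, or a fake for tests and outages) is injected behind an abstraction:

```csharp
// Hypothetical dependency-inversion sketch for a service call.
public interface IUserService
{
    string GetUserName(int id);
}

// In production this might wrap HttpClient and call a REST API.
// In tests, or when the real service is down, a fake is substituted.
public class FakeUserService : IUserService
{
    public string GetUserName(int id) { return "user-" + id; }
}

public class GreetingBuilder
{
    private readonly IUserService _users;
    public GreetingBuilder(IUserService users) { _users = users; }

    // This class never knows whether the data came over the wire.
    public string Greet(int id) { return "Hello, " + _users.GetUserName(id); }
}
```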
Theoretical Score: 25% – extremely low success rate
Implementation Score: 25% to variable – there is no clear best practice here.
Microservices don’t even try with this one. Even the top architects balk at the importance of DRY with microservices. Almost invariably, they recommend that you DO repeat yourself. With the packaging abilities of this day and age (Maven, NuGet, npm, etc.), there is no excuse for this. Duplicating code is rarely a good idea.
There are exceptions to D.R.Y., for example, unit tests. I duplicate code in tests all the time because a self-contained test is better than a hundred tests using the same setup code. If I need to change shared setup, I risk breaking all the tests using it, whereas if I copy my setup, each test stands alone and better isolates what it is trying to test.
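Here is a small, self-contained C# illustration of that point (plain methods stand in for a test framework; all names are hypothetical). Each test builds its own input, so changing one cannot break the other:

```csharp
// Hypothetical code under test.
public static class MathUnderTest
{
    public static int Sum(int[] values)
    {
        int total = 0;
        foreach (var v in values) total += v;
        return total;
    }
}

public static class SumTests
{
    public static bool Sum_OfThreeValues_AddsThem()
    {
        var input = new[] { 1, 2, 3 };   // setup duplicated on purpose
        return MathUnderTest.Sum(input) == 6;
    }

    public static bool Sum_OfEmptyArray_IsZero()
    {
        var input = new int[0];          // its own setup, fully isolated
        return MathUnderTest.Sum(input) == 0;
    }
}
```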
Do microservices fall into the same bucket as unit tests? No. Unit tests usually find bugs but don’t usually ship bugs the way production code does. Microservices aren’t like unit tests at all; they are production code. If you copy code to 10 production microservices and find a bug, fixing it in all ten places is going to be a problem.
Theoretical Score: 0% – complete failure
Implementation Score: 25% to variable – there is no clear best practice here. An implementor could balance how much code is copied vs contained in packaging systems.
Microservices can crash and burn when it comes to Big O. Remember, Big O is how many times an action has to be done for a given set of things, N, where N is a variable representing the number of things. If there are two sets of things, you use two variables, N and M; for three sets, N, M, and K (see the pattern: just keep adding a variable for each set of things). The cost per set of things is often processor, memory, or disk space, but it is not limited to those. It can be anything: IP addresses, docker images, pipelines, coding time, test time.
Big O (1) is the ultimate goal. If you can’t reach it, the next best is Big O (Log N). If you can’t reach that, then you are at least Big O (N), which isn’t good. That means your technology does NOT scale. Worse, you could be Big O (N * M) or Big O (N^2), in which case your technology slows down quadratically and scaling is impossible without a change.
What is the Big O for N microservices in regards to source control? Big O (N)
What is the Big O for N microservices in regards to CI/CD pipelines: Big O (N).
What is the Big O for N microservices in regards to docker containers? Big O (N)
What is the Big O for the number of terraform files (or whatever config you use for your deployment to your cloud environment of choice) for N microservices that you have to maintain? Big O (N)
What is the Big O for N microservices in regards to IP Addresses? Big O (N) – however, you can get to Big O (1) if you configure an intermediary routing service, but now all you’ve done is create a Big O (N) configuration requirement.
What is the Big O for microservices in regards to coding time? Big O (N) – remember, the recommendation from even the leading experts is to ignore the DRY principle and repeat your code.
What is the Big O for a mesh of microservices that have to communicate to each other? Big O (N^2)
A couple of places microservices shine in Big O are:
The Big O of microservices is terrible and nobody is talking about it. Why have microservices gotten away with being Big O (N) for all this time? There are a couple of reasons:
So when should you use Microservices?
Well, if you consider a microservice to be a cloud RESTful service, then for cloud-delivered solutions microservices are probably going to have a higher success rate for you.
If you are installing on desktops, laptops, or mobile devices, then microservices, as they are defined, are not the best solution. However, that doesn’t mean you should have a spaghetti-code monolith. If you are installing an application (not just a link to a cloud website), then please keep your monolith; only instead of breaking it up into microservices on docker containers, follow S.O.L.I.D. principles to break it up internally.
Theoretical Score: 15% – unless we are talking about a global service, where, in those few instances, it is 100%.
Implementation Score: 10% to variable – an implementor could use shared CI/CD pipelines and terraform files with variables (but most aren’t that mature yet). Some might use only 1 public IP, but they still need N private IPs.
The future is bright. As many of these Big O issues are solved, which will come with maturity, microservices will naturally become more attractive.
A Network Time Protocol service is a great example of one that should be a microservice. It has one responsibility and one responsibility only. We could have one instance of it for the whole world (notice that suddenly makes this microservice Big O (1), doesn’t it?). However, distance is a problem, so the United States needs its own, Europe needs its own, and China needs its own. It doesn’t have to be separate code, just the same code deployed to multiple cloud regions.
Many services for cloud products can be single-responsibility shared services, which is why microservices target cloud products so well.
Another is the ability to have a microservice auto-deploy additional instances of itself, often in different regions, to support scaling.
Not all services are shared. Some services need to be custom per customer. Microservices are not good for these. Especially if it is a pack of services.
Microservices are best designed for cloud solutions or internal only integration services. If you sell software that a customer should install on-premise (on-premise means on one of their systems in their environments), microservices are not a good option.
Everything could be in the cloud but not everything should be in the cloud.
You don’t want customers to have to deploy 100 docker containers to install your software on-premise. You just don’t. That doesn’t mean you couldn’t have a single cohesive system that includes microservices all installed on the same server, but by many definitions, those microservices are not microservices if they run on the same server. Instead, they become a cohesive but decoupled single system.
A dark network, by definition, has no access to the internet. That doesn’t mean these environments can’t have their own internal clouds, with microservices, but chances are, if they don’t have internet access, they won’t be accessed by a billion people and won’t need to be elastic.
Like it or not, microservices architecture can degrade the UI experience. Why? Because microservices are usually asynchronous and event-driven. Microservices, especially asynchronous event-driven ones, often make the UI harder to code because you call a service but get no response; you then have to code the UI to obtain the response from an event. This also increases debugging time. Some people say a synchronous microservice is not a microservice. If that is true, then all microservices make the UI harder to code and debug. If microservices make UI code harder, that is a significant con that every implementor should be aware of.
No matter who makes the claim that microservices are 100% decoupled, they are wrong if a UI requires that microservice. If Service X is required by a UI, the UI is coupled to it being up. It doesn’t matter if it is a microservice that fails gracefully or a monolith that crashes without grace. If a customer is in the UI and they can’t do something because a service is down, that service is a dependency, and the belief that changing a UI’s dependency to a microservice solves this is just false. If the UI doesn’t work, it doesn’t work. Just because the code itself isn’t coupled doesn’t mean a UI’s functionality isn’t tightly coupled to a dependent microservice’s existence and uptime.
Microservices are here to stay and are a great tool for the right uses. But they are not a Swiss Army knife. They are best for delivering cloud solutions or taking processing off the client in desktop/mobile apps.
Your code should have a single responsibility and, vice-versa, a single responsibility should have a single set of code (if you have three pieces of code that each have a single responsibility, but it is the same single responsibility, you are not S.O.L.I.D.). Look at interfaces, look at dependency injection, and please look at plugins. Plugin-based technology gives you almost everything you get from microservices.
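Here is a minimal C# sketch of the plugin idea (names are hypothetical, and plugin discovery is simplified to a constructor parameter; a real host would load assemblies from a plugins folder). The host knows only the interface, and each "block" ships separately:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical plugin contract the host exposes.
public interface IPlugin
{
    string Name { get; }
    string Execute(string input);
}

// A plugin that would normally live in its own assembly.
public class UpperCasePlugin : IPlugin
{
    public string Name { get { return "upper"; } }
    public string Execute(string input) { return input.ToUpperInvariant(); }
}

public class PluginHost
{
    private readonly Dictionary<string, IPlugin> _plugins;

    // Discovery simplified: real code might scan a folder with Assembly.Load.
    public PluginHost(IEnumerable<IPlugin> plugins)
    {
        _plugins = plugins.ToDictionary(p => p.Name);
    }

    public string Run(string name, string input)
    {
        return _plugins[name].Execute(input);
    }
}
```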
Microservices can be a great tool or the wrong tool. Choose wisely.
Note: This is obviously a highly summarized blog article, so please feel free to share your opinion and nit-pick as that is a form of crowdsourcing and is how blog articles get better.
Do you remember when the first computer took up the size of a room? I predict that we will say something similar about the data center.
In the 2030s, we will say, “Do you remember when a data center was the size of a building?”
It won’t be long before we can buy a 1U (rack mount size) data center. How? We aren’t that far away. Let’s just combine a few technologies:
Also, have you noticed this pattern: as the client or on-premise device gets more powerful, more runs on the client?
Main Frame <————–> Dumb terminal
Web Server <————–> Desktop PC (Browser becomes Terminal)
Web Server <————–> Desktop PC (Browser runs code that used to run on the server)
The Cloud <————–> Mobile device
Data Center
The pattern is this: What is on the server, eventually moves to the terminal. And the terminal gets ever smaller.
Now, there has also been a wave where hardware started in-house, moved out to hosting services, moved back in-house when internal data centers became easy, then moved back out when cloud was large and difficult to manage.
Once cloud is easy and smaller, that wave will move back in-house.
Imagine that we have a micro server, a Raspberry Pi type of device, only it has a quantum processor and is the size of a Micro SD card. It has metal connectors and slides into a bus on a 1U server. The 1U server bus holds 100 x 200 of these small micro servers for a total of 20,000 servers in 1U of space. Each one has 1 TB of space.
Now these are small and easy to host internally. A company can easily host one of them or put one in US East, US West, Europe, and Asia, and anywhere needed.
This is a cloud in a box.
git clone <path or url to repo>
git init
git fetch
git branch mybranch
git checkout mybranch
git checkout 0bf7e9a915a15be0bdd6b97e79642b76aa0bf3ff
Want to get your code from before one or more changes? Find the commit id and check it out. When you are done looking around, return to your branch:
git checkout mybranch
You can’t do much more than look around, but it can be useful, especially after a major architecture change that broke one tiny thing and you need to know why.
git pull
git add filename
git mv sourcefile destinationfile
Note: You can move a directory, and the source file or destination file can include directories.
git branch -d mybranch
git status
git checkout path/to/file.ext
This makes the repository clean again.
Do a dry run first with -n.
git clean -n
Then do it for real with -f.
git clean -fxd
git diff
git merge myBranch
Take all upstream source files (during a merge, --theirs is the branch being merged in)
git checkout --theirs .
git add .
Keep all local files (during a merge, --ours is your current branch)
git checkout --ours .
git add .
Abort the merge
git merge --abort
Reset your local branch to head, but keep all the changes. Use this to undo a commit.
git reset HEAD^
This at first looks easy, but there are complexities, especially if you have already pushed.
git rebase master
If a merge conflict occurs, fix it and then run:
git rebase --continue
If you have already pushed, run this to push once rebase is complete.
git push --force-with-lease
This is a multistep process. The assumption is that you are in your feature branch.
Make sure you have no lingering changes and everything is committed before starting.
Branch name in example: FeatureA
git checkout master
git pull
git checkout -b FeatureA_2
git merge --squash FeatureA
git commit
Now if you want the branch named the same, you can delete FeatureA and rename FeatureA_2 to FeatureA.
git branch -d yourbranch
To force, just use a capital D.
git branch -D yourbranch
git branch -m newBranchName
If you are in master and want to rename a feature branch without checking it out:
git branch -m oldBranchName newBranchName
Often, when multiple developers are working on the same solution and adding new projects to it, git will conflict easily.
Instead of trying to merge the .sln, it is often much faster, especially if you have only added a project or two, to just take the current master’s .sln and re-add your projects to the sln.
So imagine you are working on branch FeatureA.
Note: Remember, where “ours” and “theirs” points to is opposite of where they point to on a merge.
git checkout master
git pull
git checkout FeatureA
git rebase master
git checkout --ours /path/to/yourproj.sln
git add /path/to/yourproj.sln
git rebase --continue
You will then have to save your commit, as the commit text will open in the editor. In vim, press Esc, type :wq! and hit Enter.
Now, if your branch has many check-ins, you may have to repeat the process to keep the master (ours) .sln file.
Once your rebase is completed, make your changes to your .sln and check them in.
git config --global core.editor "'C:/Program Files/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin"
If you haven’t paid attention to the development world, you might have missed the current movement I’d call “reusable building block development.” You know, as kids, we could get blocks and build anything with them. We only had to stack them. Well, having an n-tier stack is very common now, so stacking isn’t the issue. It is having blocks that are easy to stack. Some call it the open source movement, and while most reusable building blocks are open source, not all of them are. Many building blocks don’t have to be open source; they can simply be well-documented and work well.
With NuGet and npm, building blocks are being created and published daily. The problem now is helping other developers recognize this movement: changing our mindset from “we can’t use it if it wasn’t invented here” to something more like “this is our unique stack of building blocks for a unique problem, and this stack was invented here.”
I have created a bunch of building blocks for C#. Check out my github account at https://github.com/rhyous. You will see a few reusable building blocks:
I actually have many more building blocks. Take a look.
I wrote an extension method to DateTime today. I wanted to call something simple to see if one date is within a couple of days of another date. There isn’t a Within method, so I set out to create one, and this is what I came up with.
Let me know what you think.
using System;
using System.Collections.Generic;

namespace Rhyous.WebFramework.Handlers.Extensions
{
    public enum DateTimeInterval
    {
        Milliseconds,
        Seconds,
        Minutes,
        Hours,
        Days,
        Weeks,
        Months,
        Years
    }

    internal class DateTimeIntervalActionDictionary : Dictionary<DateTimeInterval, Func<double, TimeSpan>>
    {
        #region Singleton
        private static readonly Lazy<DateTimeIntervalActionDictionary> Lazy =
            new Lazy<DateTimeIntervalActionDictionary>(() => new DateTimeIntervalActionDictionary());

        public static DateTimeIntervalActionDictionary Instance
        {
            get { return Lazy.Value; }
        }

        internal DateTimeIntervalActionDictionary()
        {
            Add(DateTimeInterval.Milliseconds, TimeSpan.FromMilliseconds);
            Add(DateTimeInterval.Seconds, TimeSpan.FromSeconds);
            Add(DateTimeInterval.Minutes, TimeSpan.FromMinutes);
            Add(DateTimeInterval.Hours, TimeSpan.FromHours);
            Add(DateTimeInterval.Days, TimeSpan.FromDays);
            Add(DateTimeInterval.Weeks, (double d) => TimeSpan.FromDays(d * 7));
            Add(DateTimeInterval.Months, (double d) => TimeSpan.FromDays(d * 30));
            Add(DateTimeInterval.Years, (double d) => TimeSpan.FromDays(d * 365));
        }
        #endregion
    }

    public static class DateExtensions
    {
        public static bool IsWithin(this DateTime dateTime, double interval, DateTimeInterval intervalType, DateTime comparisonDateTime)
        {
            TimeSpan allowedDiff = DateTimeIntervalActionDictionary.Instance[intervalType].Invoke(interval);
            // Duration() makes the difference absolute, so argument order doesn't matter.
            TimeSpan diff = (dateTime - comparisonDateTime).Duration();
            return diff <= allowedDiff;
        }
    }
}
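A quick, self-contained sketch of how I’d call something like this, simplified to days only (the name IsWithinDays is mine, for illustration; Duration() makes the comparison symmetric, so it doesn’t matter which date is earlier):

```csharp
using System;

// Hypothetical, simplified version of the IsWithin idea, days only.
public static class DateWithinExample
{
    public static bool IsWithinDays(this DateTime a, double days, DateTime b)
    {
        TimeSpan allowed = TimeSpan.FromDays(days);
        // Duration() takes the absolute value of the difference.
        return (a - b).Duration() <= allowed;
    }
}
```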
The Tiobe index is really missing one piece of information about .Net for its users. Java is #1, so users should use Java, right? Well, maybe not. Let’s talk about the problems before we move on.
I am going to make an argument that:
.Net has more than one language writing against it. That makes it appear less popular because usage is fragmented across languages. In fact, two of them are in the top 5 or 6. However, the fact that a dll compiled in one .Net language can be consumed by any other .Net language is not captured here. I am not saying this should be on the same list of programming languages, but Tiobe should make it clear that the combined .Net languages show .Net being used more heavily. Similarly for Java, there are other languages that compile to the JVM. Perhaps there should be a page on compile targets: what percent of languages compile to .Net’s Common Intermediate Language, compile to the Java Virtual Machine, compile to machine code, or don’t compile at all?
As for intermediate languages, only two stand out: Java and .Net. Java is #1, but it has only one language in the top 10. .Net has two in the top 10, and the combined .Net languages are easily a rival to the combined JVM languages.
Look at the Tiobe index and add up the .Net Framework languages:
.Net Framework Languages
| Language | 2019 Tiobe Rating |
|---|---|
| Visual Basic .Net | 5.795% |
| C# | 3.515% |
| F# | 0.206% |
| Total | 9.516% |
Notice that, combined, the rating of the three main .Net languages is 9.516%. That puts .Net in the #3 position behind Java, C, and C++.
What about Visual C++? Yes, you can write .Net code in C++. However, that language is completely missing from Tiobe. Or is it? Are all Visual C++ searches lumped in with C++? If so, shouldn’t Visual C++ be separated out from C++? What Tiobe rating would Visual C++ get? That is hard to guess, but it is a language that has been around for almost two decades. Let’s assume a certain percentage of C++ developers are actually doing Visual C++: more than F#, but quite a lot less than C#. Unlike Tiobe, I don’t have this data, so let’s make a wild guess and say 0.750%. Perhaps Tiobe could comment on this; perhaps they couldn’t find data on it themselves.
.Net Framework Languages

| Language | 2019 Tiobe Rating |
|---|---|
| Visual Basic .Net | 5.795% |
| C# | 3.515% |
| F# | 0.206% |
| Visual C++ (estimated) | 0.750% |
| Total | 10.266% |
As you see, .Net combined is clearly #3 just by adding up the .Net languages. It is well past Python, which in fact can be used to code both for .Net (IronPython) and for the Java JVM (Jython). What percent of Python is used for that?
Here is a wikipedia list of .Net-based languages: https://en.wikipedia.org/wiki/List_of_CLI_languages.
Similarly, for Java, languages like Groovy up its score. Here is a wikipedia list of JVM-based languages: https://en.wikipedia.org/wiki/List_of_JVM_languages.
For all the problems and complaints of bloat, Visual Studio is the most feature-rich IDE by such a long way that I doubt any other IDE will ever catch up to it, except maybe Visual Studio Code, which is just as much a part of the Tiobe index problem as Visual Studio is.
The better the tool, the less web searching is needed. The breadth of the features in Visual Studio is staggering. The snippets, the Intellisense, the ability to browse, view, and even decompile existing code mean that .Net developers are not browsing the web as often as developers in other languages. My first search always happens in Intellisense in Visual Studio, not on Google. The same features and tooling just aren’t there in other IDEs for other languages. Maybe Eclipse comes close, but only with hundreds of plugins that most developers don’t know about.
Since Visual Studio 2012 released, the need to search the web has decreased with every single release of Visual Studio. I am claiming that C#, the primary .Net Framework language Microsoft targets in Visual Studio, is used far more than Visual Basic .Net. Tiobe has Visual Basic .Net at 5.795% and C# at 3.515%, but reality doesn’t match Tiobe’s statistics. C# is used far more than Visual Basic .Net.
I am making the hypothesis that, as the language primarily coded in Visual Studio, C# would appear to go down in the Tiobe index since the release of Visual Studio 2012. Let’s test this by looking at the Tiobe year-by-year chart for C#. Do we see the index going down starting with the release of VS 2012?
After looking at the Tiobe index, I am upgrading my claim from a hypothesis to a theory.
Other .Net languages may not experience the same effect as C#, as the tooling in .Net is primarily focused around C#.
So the reality is that the Tiobe index shows the data it can find from search engines, but the data for C# just isn’t there, because of the number of ways C# deflects the need to search.
I hypothesize that C# reached a peak Tiobe index of 8.763% and its usage has not actually gone down. Instead, it has gone up. However, the data doesn’t exist to prove it. Assuming the hypothesis is correct and C# usage has gone up, its rating should be closer to 9 or 10. That means C# is probably #3 on its own.
If we adjust for this problem, simply by using the 2012 index and not assuming the usage rate has gone up, we see the following:
.Net Framework Languages

| Language | Adjusted Tiobe Rating |
|---|---|
| Visual Basic .Net | 5.795% |
| C# | 8.7% |
| F# | 0.206% |
| Visual C++ (estimated) | 0.750% |
| Total | 15.451% |
Now, I am not saying .Net is above Java with my hypothesized adjusted rating. Java has other languages that compile to the JVM that would similarly raise its number, and it is still #1.
Microsoft has done a great job with a lot of their documentation. Some of this could be attributed to Visual Studio as well. After clicking a link in Visual Studio, we are taken directly to a site like https://msdn.microsoft.com, where I do a lot of my language searches.
Also, Microsoft has built a community where customers can ask questions and get data.
Tiobe has a nice document that clearly states which search engines did not qualify and why.
See: https://www.tiobe.com/tiobe-index/programming-languages-definition/
I would argue that a significant amount of searching for .Net languages is done primarily on Microsoft.com. I can only provide personal data: I often go directly to the source documentation on Microsoft.com and search on Microsoft’s site, and once I am there, almost all further searches for .Net data occur there.
Microsoft has more C# developers in its company than many programming languages have worldwide. Are they doing web searches through the list of qualified search engines?
I hypothesize that the better the documentation, the less searching on the web is required. I also hypothesize that Microsoft is one of the best at providing documentation for its languages.
Because the documentation for the .Net Framework is so good, the question is usually answered in a single search instead of the multiple searches that less well-documented languages may require.
Colleges teach certain languages; Python and C++ are the top languages taught in college. I would estimate that, because of this, the languages primarily taught in college have far higher search rates. Unfortunately, .Net languages, because of their formerly proprietary nature (no longer the case now that .Net Core is open source), were shunned by colleges.
It would be interesting to filter out searches by college students. Unfortunately, how would Tiobe know whether a search came from a college student or not?
Tiobe is only looking at certain words. The words that are being queried are:
Further, Tiobe says:
The ratings are calculated by counting hits of the most popular search engines. The search query that is used is
+"<language> programming"
This problem piggybacks on Problems 3, 4, and 5. Visual Studio is so good that we know exactly what we are looking for. As a C# developer, I hardly ever type C# into my searches. I type something like WebApi, WCF, WPF, System.Net.Http, Entity Framework, LINQ, Xamarin, or many other searches. Microsoft documentation is so clear and specific (Problem 5) that we can do highly specific searches without including the word C#.
Yes, other languages have libraries too, but do other languages have Microsoft’s marketing department, which brands libraries with trademarks and logos and makes that brand the go-to search phrase? I don’t think any other programming language does this. Microsoft is lowering the web searches for C# with its own excellent marketing.
This is further evidence that the actual usage of C# has gone way up while the Tiobe index has gone way down. Asp.Net, Ado.Net, Razor, WCF, WebApi, WPF, WF, etc. What other language has logos and brands around specific parts of the language?
I don’t always add C# to my Google searches. However, when I did, it was often changed to just C: the sharp symbol, #, was removed. This recently stopped happening on Google, but it used to happen with every search in every browser. It was frustrating.
Has this been addressed in search engine stats?
The belief that C# is in the 3% range is an unfortunate error of circumstances. .NET should be looked at as the second most important tool for a programmer, second only to Java, and above all other programming languages.
First, yes, I am still using WCF. Let’s move past that concern to the real concern.
There are a dozen blog posts out there that explain how to replace the WCF serializer with Json.Net. However, every last one of them says that you must use wrapping, and that using parameters in the UriTemplate is not supported. https://blogs.msdn.microsoft.com/carlosfigueira/2011/05/02/wcf-extensibility-message-formatters
Just search the internet for WCF IDispatchMessageFormatter Json.Net. You will find all the articles that only work without UriTemplate support.
Well, I needed it to work with UriTemplate support without wrapping.
Turns out that this solution is far easier than I expected. I came across this solution only after spending hours browsing Microsoft’s code.
So, to start, using parameters in the UriTemplate means that your Url or Url parameters will be specified in the UriTemplate and will have parameters.
For example, the OData spec says that you should access an entity by Id with a Url similar to this one:
https://somesite.tld/some/service/Users(1)
Then the method for the WCF service is like this:
[OperationContract]
[WebInvoke(Method = "GET", UriTemplate = "Users({id})", ResponseFormat = WebMessageFormat.Json)]
OdataObject Get(string id);

public virtual OdataObject Get(string id)
{
    // code here
}
That is fine for a GET call as it doesn’t have a body. But what about a POST, Patch, or PUT call that does have a body? And what about now that the world is realizing that a GET sometimes needs a body?
Also, the examples provide a lot of code to figure out whether it is a GET call and then not even use the custom Json.Net IDispatchMessageFormatter. None of that code is necessary with this solution.
Let’s look at a PUT call that updates a single property of an entity as this has two parameters in the UriTemplate as well as a message body.
[OperationContract]
[WebInvoke(Method = "PUT", UriTemplate = "Users({id})/{property}", ResponseFormat = WebMessageFormat.Json)]
string UpdateProperty(string id, string property, string value);

public virtual string UpdateProperty(string id, string property, string value)
{
    // code here to update the user
}
So there are two parameters in the UriTemplate, id and property, and the last parameter, value, is in the message body. Not a single solution for replacing the WCF serializer with Json.Net supports this scenario. Until now.
The goal is to deserialize the request with Json.Net. But the solutions provided break UriTemplate parameters while trying to reach that goal. The goal is not to replace the default WCF UriTemplate parameter handling.
So now we can define a new problem: How do we deserialize the body with Json.Net but still have the UriTemplate parameters handled by WCF? The code to deserialize is the same code for both the parameters and the message body. We need to get the parameters without having WCF use the default deserializer for the message body.
Turns out, this problem is easy to solve.
Microsoft published their WCF code. Look at this code, lines 50-54: https://github.com/Microsoft/referencesource/blob/master/System.ServiceModel.Web/System/ServiceModel/Dispatcher/UriTemplateDispatchFormatter.cs
If you notice in line 50, WCF takes the number of parameters found in the Url and Url parameters and subtracts that from the total parameter count. If the message has no body, the subtraction always results in 0. If the message has a body, the subtraction always results in 1, telling WCF to deserialize the body. Well, I want WCF to do what it normally does with UriTemplate parameters, so if there is no body, use the WCF default behavior (which all the blogs say to do, but they do it the hard way).
Solution:
protected override IDispatchMessageFormatter GetReplyDispatchFormatter(OperationDescription operationDescription, ServiceEndpoint endpoint)
{
    var parentFormatter = base.GetReplyDispatchFormatter(operationDescription, endpoint);
    return new CustomDispatchMessageFormatter(this, operationDescription, parentFormatter);
}
public void DeserializeRequest(Message message, object[] parameters)
{
    if (message.IsEmpty || parameters.Length == 0)
        ParentFormatter.DeserializeRequest(message, parameters);
    else
        DeserializeMessageWithBody(message, parameters);
}

private void DeserializeMessageWithBody(Message message, object[] parameters)
{
    if (parameters.Length > 1)
    {
        object[] tmpParams = new object[parameters.Length - 1];
        ParentFormatter.DeserializeRequest(message, tmpParams);
        tmpParams.CopyTo(parameters, 0);
    }
    if (message.GetWebContentFormat() != WebContentFormat.Raw)
        throw new InvalidOperationException("Incoming messages must have a body format of Raw.");
    byte[] rawBody = message.GetRawBody();
    var type = OperationDescription.Messages[0].Body.Parts.Last().Type;
    parameters[parameters.Length - 1] = RawBodyDeserializer.Deserialize(rawBody, type);
}
The deserializer becomes vastly simplified now that it isn’t also trying to handle wrapped parameters.
public class RawBodyDeserializer : IRawBodyDeserializer
{
    public object Deserialize(byte[] rawBody, Type type)
    {
        using (MemoryStream ms = new MemoryStream(rawBody))
        using (StreamReader sr = new StreamReader(ms))
        {
            JsonSerializer serializer = new JsonSerializer();
            return serializer.Deserialize(sr, type);
        }
    }
}
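To see the deserializer in isolation, here is a minimal, self-contained sketch of how it might be called directly. The User DTO and the JSON payload are invented for this example, and the interface is omitted so the sample compiles alone.

```csharp
using System;
using System.IO;
using System.Text;
using Newtonsoft.Json;

// Mirrors the RawBodyDeserializer above, minus the interface,
// so this sample compiles on its own.
public class RawBodyDeserializer
{
    public object Deserialize(byte[] rawBody, Type type)
    {
        using (MemoryStream ms = new MemoryStream(rawBody))
        using (StreamReader sr = new StreamReader(ms))
        {
            JsonSerializer serializer = new JsonSerializer();
            return serializer.Deserialize(sr, type);
        }
    }
}

// A hypothetical DTO, just for this example.
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

Given the raw body bytes of a request such as {"Id":1,"Name":"Alice"}, calling Deserialize(rawBody, typeof(User)) returns a populated User instance.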
You may encounter the need to debug into a dependency that is a NuGet package. If the NuGet package is proprietary, you need to contact the vendor. However, if the NuGet package is open source, perhaps on GitHub, then you have all the tools you need to debug into it. Debugging into an open source NuGet package is what this article is about.
We are going to use Rhyous.StringLibrary for this example. It is a simple open source project that provides some common extensions to strings. These are extensions that are often found duplicated in many different projects and sometimes multiple times in the same project.
Check out the repo from GitHub. You need a Git client. If you don’t have one, you can use GitHub Desktop or the one that is included in the Windows install of Git.
Some NuGet packages have different assembly versions than the code. I know, they shouldn’t, but it happens. Make sure that the assembly version of the dll referenced via the NuGet package is the same as the assembly version in the downloaded source.
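A quick way to check, assuming you know both dll paths on your machine (the class and method names here are my own sketch, not part of any library), is to read the assembly version straight off each file:

```csharp
using System;
using System.Reflection;

public static class AssemblyVersionCheck
{
    // Reads the compiled assembly version from a dll on disk
    // without loading the assembly into the current AppDomain.
    public static Version GetVersion(string dllPath)
        => AssemblyName.GetAssemblyName(dllPath).Version;

    // True if the package dll and the dll built from the
    // downloaded source have the same assembly version.
    public static bool VersionsMatch(string packageDllPath, string builtDllPath)
        => GetVersion(packageDllPath) == GetVersion(builtDllPath);
}
```

For example, you might compare the dll in your solution's packages folder against the dll produced by building the downloaded source; the exact paths depend on your machine.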
If you go to your project that references the dll, find and highlight the reference and go to properties, you can see the full path to the referenced dll.
You should now be able to step into the Rhyous.StringLibrary source from your project.
Note: If you have two instances of Visual Studio open, one for your project and one for the Rhyous.StringLibrary project, you may think you need to set the breakpoint in the Rhyous.StringLibrary instance. You don’t. You don’t even need the Rhyous.StringLibrary project open, unless you need to make a change, recompile, and recopy the dll and pdb to the packages directory. You simply step into the code from your own project in order to set a breakpoint.
Note: One trick is to go to Tools | Options | Debugging | General and turn off Step over Property operators (Managed Only).
You should now be easily debugging your NuGet package.
I keep failing to avoid a common mistake as a leader: sending long emails. It seems so easy. For whatever reason, as the dev lead, I cannot talk to a person face-to-face, so I write a long email.
I could spend time talking about why email is bad, or I could show you how emails make people feel by showing you an email dialogue.
Why long emails should be avoided:
Dev Lead: I’m being a good mentor. I write a nice long email that will help the team grow on Subject A, including tons of info on Subject A and its 5 benefits. I send this email to Dev 1 and CC the other two members of my team.
Feels good about his leadership.
Dev 1: What the dev thinks: Uh, oh. The dev lead is having a hissy fit again. Looks like he is pissed at something I did. What a jerk.
Feels angry.
Dev 2: Oh no. I have no idea what the dev lead is talking about. Do I know my stuff? Googles and tries to learn what the dev lead is talking about.
Feels shamed.
Dev 3: Ugh! Why is he trying to teach me crap I already know.
Feels patronized.
Manager: Hey, the team didn’t appreciate that email.
Dev Lead: Feels like a poor leader.
Manager: Feels like he is losing his team.
Why it would have happened better face-to-face:
Dev Lead: Hey devs. I want to discuss subject A. What do you know about it already?
Dev 1: I’ve used it before
Dev 2: Stays silent.
Dev 3: I know all about Subject A.
Dev Lead: OK, Dev 3, tell us about subject A.
Dev 3: Gives four excellent points about subject A. One of them the dev lead didn’t know.
Dev Lead: Adds two points about Subject A that Dev 3 didn’t know. Changes his list from 5 to 6, adding the one item Dev 3 knew that he didn’t.
Feels impressed by Dev 3.
Dev 1: Feels growth.
Dev 2: Feels good to be introduced to a new subject.
Dev 3: Impressed that the dev lead let him educate the team.
Feels more respect for dev lead. Also notes that the Dev Lead knew things he didn’t and thinks he should listen more.
Manager: Feels good about the team.
It is all about the feelings, and there is something about face-to-face team interaction that leads to good feelings and something about long emails that always leads to bad feelings.
So, if you look at the face-to-face interaction, you can see that it all started with a short question. You could simulate this in a short email:
Dev Lead: Who can give me all the benefits of Subject A using only the knowledge in your head? No browser search allowed until after you respond.
Dev 1: Responds with the single most common benefit of Subject A.
Dev 2: Doesn’t respond.
Dev 3: Responds with four items, one that the dev lead didn’t know about.
Dev Lead: Interesting. Here are the items that the team responded with. I added two more benefits for a total of 6. Should we use subject A to get those 6 benefits in our project?
Now imagine the response was crickets.
Dev Lead: Who can give me all the benefits of Subject A.
Dev 1: Doesn’t respond.
Dev 2: Doesn’t respond.
Dev 3: Responds with one item.
Dev Lead: Subject A is interesting and important to our project. I am going to create a quick training on it.
Dev Lead: Writes a doc on it and sends it to the team.
Team: Feels good to learn something new.
Manager: Feels like the team is running itself.
I am going to put these tips into practice next time I feel like sending a long email.
This is a simple check-list to make code reviews more valuable. Simply check these rules.
Download a single page word document: Code Review Cheat Sheet
This is a quick check rule that isn’t extremely rigid. See the 10/100 rule of code
Is the method that was added or changed 10 lines or less? (There are always exceptions such as Algorithms)
Is the class 100 lines or less?
Note: Model classes should have zero functions and be closer to 20 lines. Logic classes should be sub-100 lines.
S.O.L.I.D. is an acronym. See this link: https://en.wikipedia.org/wiki/SOLID
Does each class have a single responsibility? Does each method have a single responsibility?
Is this the only class that has this responsibility? (No duplicate code; D.R.Y. = Don’t Repeat Yourself.)
Can you extend the functionality without modifying this code? Config, Plugins, event registration, etc.
Is there configuration in this code? If so, extract it. Configuration does not belong in code.
Is inheritance used? If so, does the child type cause issues the parent type wouldn’t cause?
Does the code use interface-based design?
Are the interfaces small?
Are all parts of the interface implemented without throwing a NotImplementedException?
Does the code reference only interfaces and abstractions?
Note: If new code references concrete classes with complex methods, it is coded wrong.
Is the Code 99% covered? Is code not covered marked with the ExcludeFromCodeCoverageAttribute?
Are all parameter values that could cause different behavior covered?
See these links:
Unit testing with Parameter Value Coverage (PVC)
Parameter Value Coverage by type
Are your names typo free?
Do your file names, class names, method names, variable names match existing naming conventions?
Do you have any glaringly obvious Big O problems? O(n) or O(n²) when it could be O(1) or O(log n)?
See: https://en.wikipedia.org/wiki/Big_O_notation
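As a sketch of what such a problem looks like in practice (the method names here are invented for illustration), checking membership against a List inside a loop is O(n×m), while building a HashSet first makes the same work O(n+m):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class BigOExample
{
    // O(n * m): List<T>.Contains scans the whole list for every lookup.
    public static int CountMatchesSlow(List<int> items, List<int> lookups)
        => lookups.Count(x => items.Contains(x));

    // O(n + m): HashSet<T>.Contains is constant time on average.
    public static int CountMatchesFast(List<int> items, List<int> lookups)
    {
        var set = new HashSet<int>(items);
        return lookups.Count(x => set.Contains(x));
    }
}
```

Both methods return the same answer; only the time complexity differs, which is exactly the kind of thing a code review should catch.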
This article is a reference to Unit Testing with Parameter Value Coverage (PVC).
Short Name | .NET Class | Type | Width | Range (bits) |
---|---|---|---|---|
byte | Byte | Unsigned integer | 8 | 0 to 255 |
sbyte | SByte | Signed integer | 8 | -128 to 127 |
int | Int32 | Signed integer | 32 | -2,147,483,648 to 2,147,483,647 |
uint | UInt32 | Unsigned integer | 32 | 0 to 4294967295 |
short | Int16 | Signed integer | 16 | -32,768 to 32,767 |
ushort | UInt16 | Unsigned integer | 16 | 0 to 65535 |
long | Int64 | Signed integer | 64 | -9223372036854775808 to 9223372036854775807 |
ulong | UInt64 | Unsigned integer | 64 | 0 to 18446744073709551615 |
float | Single | Single-precision floating point type | 32 | -3.402823e38 to 3.402823e38 |
double | Double | Double-precision floating point type | 64 | -1.79769313486232e308 to 1.79769313486232e308 |
char | Char | A single Unicode character | 16 | Unicode symbols used in text |
bool | Boolean | Logical Boolean type | 8 | True or false |
object | Object | Base type of all other types | | |
string | String | A sequence of characters | | |
decimal | Decimal | Precise fractional or integral type that can represent decimal numbers with 29 significant digits | 128 | ±1.0 × 10^−28 to ±7.9 × 10^28 |
Objects that are defined with the class keyword need the following tested:
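As an illustration of what parameter value coverage means for a primitive parameter (the method under test here is invented for the example), an int parameter should be exercised at its boundary and special values:

```csharp
using System;

public static class MathOps
{
    // A hypothetical method under test, invented for this example.
    public static bool IsPositive(int value) => value > 0;
}

public static class PvcExample
{
    // Exercises the int parameter at min, max, zero, a negative
    // value, and a positive value: the core PVC set for int.
    public static bool RunChecks()
    {
        return MathOps.IsPositive(int.MaxValue)
            && !MathOps.IsPositive(int.MinValue)
            && !MathOps.IsPositive(0)
            && !MathOps.IsPositive(-1)
            && MathOps.IsPositive(1);
    }
}
```

In a real test project each of these would be its own unit test (or a data-driven test row), so a failure pinpoints the exact value that broke.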
Skill Level: Beginner
Assumptions:
Additional Information: I sometimes cover small sub-topics in a post. Along with AWS, you will also be exposed to:
We may already have a key pair that we want to use, so we don’t want to create a new one. If that is the case, it can be uploaded.
I used OpenSSL to do this.
We’ve created InstanceManager.cs in Part 1. Let’s edit it.
public static async Task ImportKeyPair(AmazonEC2Client ec2Client, string keyName, string keyFile)
{
    var publicKey = File.ReadAllText(keyFile).Trim().RemoveFirstLine().RemoveLastLine();
    string publicKeyAsBase64 = Convert.ToBase64String(Encoding.UTF8.GetBytes(publicKey));
    await ec2Client.ImportKeyPairAsync(new ImportKeyPairRequest(keyName, publicKeyAsBase64));
}
Notice: We are calling RemoveFirstLine() and RemoveLastLine(). This is because key files have a header and footer that must be removed before sending the key up to AWS. We’ll add these extensions in the next section.
namespace Rhyous.AmazonEc2InstanceManager
{
    public static class StringExtensions
    {
        public static string RemoveFirstLine(this string text, char newLineChar = '\n')
        {
            if (string.IsNullOrEmpty(text))
                return text;
            var i = text.IndexOf(newLineChar);
            return i > 0 ? text.Substring(i + 1) : "";
        }

        public static string RemoveLastLine(this string text, char newLineChar = '\n')
        {
            var i = text.LastIndexOf(newLineChar);
            return (i > 0) ? text.Substring(0, i) : "";
        }
    }
}
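To make the effect of these extensions concrete, here is a self-contained sketch showing a PEM header and footer being stripped. The key content is fake, and the extensions are repeated so the sample compiles alone:

```csharp
using System;

public static class StringExtensions
{
    // Same extensions as above, repeated so this sample compiles alone.
    public static string RemoveFirstLine(this string text, char newLineChar = '\n')
    {
        if (string.IsNullOrEmpty(text)) return text;
        var i = text.IndexOf(newLineChar);
        return i > 0 ? text.Substring(i + 1) : "";
    }

    public static string RemoveLastLine(this string text, char newLineChar = '\n')
    {
        var i = text.LastIndexOf(newLineChar);
        return i > 0 ? text.Substring(0, i) : "";
    }
}

public static class PemExample
{
    // Strips the -----BEGIN/END----- lines, leaving only the key body.
    public static string ExtractBody(string pem)
        => pem.Trim().RemoveFirstLine().RemoveLastLine();
}
```

Feeding in a three-line PEM string returns just the middle base64 body, which is what AWS expects.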
We already have an Action argument to edit.
. . .
new Argument
{
    Name = "Action",
    ShortName = "a",
    Description = "The action to run.",
    Example = "{name}=default",
    DefaultValue = "Default",
    AllowedValues = new ObservableCollection<string>
    {
        "CreateKeyPair",
        "DeleteKeyPair",
        "ImportKeyPair"
    },
    IsRequired = true,
    Action = (value) => { Console.WriteLine(value); }
},
. . .
new Argument
{
    Name = "KeyFile",
    ShortName = "pem",
    Description = "The full path to a public key already created on your file system in PEM format. The full private key won't work.",
    Example = "{name}=c:\\My\\Path\\mykeyfile.pem",
    CustomValidation = (value) => File.Exists(value),
    Action = (value) => { Console.WriteLine(value); }
}
You can now upload a public key file for use on the Amazon Cloud.
Next: Part 4
Return to: Managing Amazon AWS with C#
I recently started interviewing candidates for some contract positions: one a Software Developer in Test position and one a Senior Software Developer position. I am deeply surprised by the candidates’ complete lack of an online presence. As I thought more about this, I realized that the Software Developer role has reached a point of maturity where portfolios are now expected. I expected every candidate to have an active account on some open source code repository, i.e. GitHub, and to have a portfolio of code there.
When it is time to interview for a position as a developer, you should have a portfolio. The days of coding on a whiteboard should be over. Instead, an interviewer should be able to easily see your code and what you have or haven’t done.
There shouldn’t be a question about whether you can write code. Instead, the question should be: Based on the code we can see this individual has written, can they be a good fit for our team?
Your portfolio cannot include proprietary code. End of discussion. If you are a developer and you can’t find code that isn’t proprietary to put into your portfolio, then what are you doing?
Even when working with proprietary code, there are many pieces of code that are so ubiquitous they probably should be part of the .NET Framework. You may use such code in every project you work on, such as common string extensions in C#, or a more complete string check in JavaScript that checks whether a string is undefined, null, empty, or whitespace.
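For example, here is a minimal sketch of the kind of ubiquitous extension I mean (the method name is my own, not framework code):

```csharp
using System;

public static class StringExtensions
{
    // A case-insensitive, null-safe Contains: the kind of helper
    // that gets rewritten in almost every C# codebase.
    public static bool ContainsIgnoreCase(this string source, string value)
    {
        if (source == null || value == null) return false;
        return source.IndexOf(value, StringComparison.OrdinalIgnoreCase) >= 0;
    }
}
```

A helper like this is trivially non-proprietary, so it belongs in a public portfolio, not rewritten from scratch at each employer.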
Even better is if your code is not just stored, but it is available to be used, such as with NuGet, npm, Maven, or other code or library packaging tool. This shows that you not only have a portfolio, but you aren’t going to waste your hours rewriting code you have already written.
I used to have mine on SourceForge but have since switched to GitHub. Visual Studio online is another option. Where you store your portfolio of your work does not matter as much as the fact that you do store it.
GitHub is where I chose. But you can easily Google for GitHub competitors if you want it to be elsewhere.
My internet handle is Rhyous. Every piece of code I write that is part of my portfolio (non-proprietary and not for someone else’s open source project) is now branded with Rhyous. Some of my older code may not be, but my new code is. For example, all my namespaces in C# now start with Rhyous. That makes it very easy to differentiate projects I have forked from projects that I have developed.
It must show:
** I find this to be so very important!
My portfolio shows my skills as a developer. My code uses SOLID principles. Much of my code is unit tested.
I don’t like to write the same code twice. I, for one, will never have to write a CSV parser in C# again, as I have a good quality one: Rhyous.EasyCsv. Parsing arguments? I’ll never write an argument parser again because I have Rhyous.SimpleArgs. I will never have to write many of my string extensions again, as I can easily grab them for any C# project from my Rhyous.StringLibrary NuGet package. Tired of using TryGetValue to get values from your dictionary? Try Rhyous.Collections and use the NullSafeDictionary, which still uses TryGetValue but moves it inside the indexer so you don’t have to worry about it.
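The null-safe indexer idea can be sketched in a few lines (this is my illustration of the concept, not the exact Rhyous.Collections implementation):

```csharp
using System.Collections.Generic;

// A dictionary whose indexer returns default(TValue) for a
// missing key instead of throwing KeyNotFoundException.
public class NullSafeDictionary<TKey, TValue> : Dictionary<TKey, TValue>
{
    public new TValue this[TKey key]
    {
        get { return TryGetValue(key, out TValue value) ? value : default(TValue); }
        set { base[key] = value; }
    }
}
```

The TryGetValue call still happens; it just lives inside the indexer, so callers read dict[key] without the ceremony.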
Maybe it has never clicked for you. What I mean by “it” is the idea of code reuse. The idea of object-oriented programming. The idea of standing on the shoulders of giants. The simple idea of using building blocks as a kid and making things from them.
Go out and make your portfolio and fill it with building blocks, so every time you build something new, you can build on foundational building blocks that are SOLID and just get better.