Oct 02

Using nuget.config to control NuGet package reference sources

Sometimes in software development you have to work around interesting obstacles.

I created proof-of-concept code working with Cosmos DB. I needed to run it both from my laptop and from a VM inside the Azure region, so I kept two copies of the code that I could tweak as needed, letting the two client connections behave differently. It was using the Azure Cosmos SDK v3 (3.0.0 to be precise): https://github.com/Azure/azure-cosmos-dotnet-v3

The challenge came when I noticed the Cosmos Client Options did not contain a way to change the consistency level. That version of the library retrieves the consistency level from the Cosmos account, which means there is no way to choose a lower consistency level than the one defined on the account.

Thankfully, looking at the latest code on GitHub, https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos/src/CosmosClientOptions.cs, shows that they have added a public property to set the consistency level. However, that code had not yet been released in a newer version of the NuGet package. I needed to modify the consistency level on the connection and wasn't able to wait for their next release.


I cloned the code to my laptop and was able to compile it. I thought I could just change the reference in my project from a NuGet package to an assembly reference. But the assembly has so many other dependencies, and I didn't want to chase down every one of them.

So, going back to the Cosmos code, I had it create a NuGet package locally. That worked great: in my PoC code I just added another NuGet source pointing at that folder. That doesn't help the code on the Linux VM, though; it can't reference a folder on my laptop. So I did this instead.
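The packaging step itself was just a dotnet pack against the cloned repo. A rough sketch; the repo location, project path, and output folder below are placeholders, not the exact paths I used:

```shell
# Hypothetical paths; adjust to wherever you cloned the SDK.
PACK_OUT="$HOME/localpackages"
mkdir -p "$PACK_OUT"
if command -v dotnet >/dev/null 2>&1 && [ -d azure-cosmos-dotnet-v3 ]; then
    # Pack the Cosmos SDK project into a .nupkg in the local folder.
    dotnet pack azure-cosmos-dotnet-v3/Microsoft.Azure.Cosmos/src/Microsoft.Azure.Cosmos.csproj \
        -c Release -o "$PACK_OUT"
else
    echo "dotnet SDK or cloned repo not present; commands shown for illustration"
fi
```

From there the .nupkg can be copied to the VM (with scp, for example) into whatever folder the nuget.config will point at.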

Custom Nuget Package

I copied the NuGet package to the VM, where it now resides in the bin/Debug folder. But then I had to tell the code where to find the package. nuget.config to the rescue.

I created a nuget.config file. The existence of this file tells the tooling where and how to retrieve NuGet packages. I added a folder source pointing at bin/Debug, right after the entry for nuget.org. Restore now attempts to find packages at nuget.org first; since my locally built package isn't there, it moves on to the next source in the list. That's where it finds my newly compiled Cosmos library, the one that contains a way to adjust the consistency level.

         <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
         <add key="local" value="bin/Debug" />

This is the link to the Microsoft documentation on using nuget.config.

Not only can you control where NuGet packages are pulled from, you can also add credentials. This is very useful when pulling from a private NuGet source such as Azure DevOps.
In my example, I have the nuget.org source as well as one called "local". If that "local" source were actually in Azure DevOps, I would add credentials like:

            <local>
                <add key="username" value="some@email.com"/>
                <add key="password" value="..."/>
            </local>

Notice that the element name "local" matches the package source name above.

If using an unencrypted password:

            <local>
                <add key="username" value="some@email.com"/>
                <add key="ClearTextPassword" value="someExamplePasswordHere!123"/>
            </local>

Complete file example:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <packageSources>
        <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
        <add key="local" value="bin/Debug" />
        <add key="privateAzureDevOpsSource" value="https://blahblah.com/foo/bar/example" />
    </packageSources>
    <packageSourceCredentials>
        <privateAzureDevOpsSource>
            <add key="username" value="some@email.com"/>
            <add key="ClearTextPassword" value="someExamplePasswordHere!123"/>
        </privateAzureDevOpsSource>
    </packageSourceCredentials>
</configuration>


The point here is that you can take code, make a private NuGet package from it, and then make that package accessible wherever you need it. The nuget.config file makes that possible.
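Since the VM is headless, it can be handy to create the file straight from the shell. A minimal sketch with just the two sources used in this post (the bin/Debug path assumes the package sits where described above):

```shell
# Write a minimal nuget.config in the project folder.
cat > nuget.config <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
    <add key="local" value="bin/Debug" />
  </packageSources>
</configuration>
EOF
# Quick sanity check that both sources made it into the file:
grep -c "<add key" nuget.config   # → 2
```

The next restore in that folder will pick the file up automatically.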


After Microsoft released a new version that included the consistency level option, I reverted to using their latest package. My custom "fix" was meant to be temporary anyway.


Sep 26

Microservices — The Easy Way is the Wrong Way

I've had the pleasure of giving my microservices presentation at the Kansas City Developers Conference (KCDC) https://www.kcdc.info/session/ses-84969
and also at the Tulsa .NET User Group.

On Oct 15th I’ll be presenting this again at DevUp.

The slide deck is now available on SlideShare.

Oct 12

Your First Azure Kubernetes Service Cluster – Using .NET Core MVC Website

In this post I’m going to show you the steps I do in my conference talk “Getting Started with Azure Kubernetes Service”.


To start with, you need to have a few things. If you don’t already have an Azure account look at getting some free resources at https://azure.microsoft.com/en-us/free/

I'm on a Windows 10 system with Bash enabled and Ubuntu installed. I like using WSL with Ubuntu because the target OS for my microservices and websites is Linux. It helps me stay sharp with Linux commands.

I have .NET Core installed. Make sure you have the latest version. https://www.microsoft.com/net/download

I also have Docker for Windows so I can build the images; you'll see this later in the post.
You'll also need the Azure CLI (az). Once that is installed, you can use it to install the Kubernetes CLI:

az aks install-cli

Hopefully everything is installed and ready at this point. Now, from your terminal, you need to log into your Azure Subscription.
Executing the following command gives you a series of characters to enter at https://microsoft.com/devicelogin. If not already logged in, you'll be prompted to sign into your Azure subscription. Once done, your terminal will show some details of the subscription you just logged into.

az login

Build a Cluster

In my talk I mention a script that I use to create a cluster and an Azure Container Registry. I found the script some time ago and tweaked it a little. It uses AZ commands so you’ll need to make sure you’re logged into the subscription you want first. I recommend making a copy of the commands in the script, paste into a text editor, then modify to your needs. Start with the Environment Variables at the top. **Warning** It takes roughly 15 minutes and could be longer. Most of the time is taken waiting for the VM’s to be provisioned and come online. The script is located at https://gist.github.com/seanw122/e7b43b543f2a44be767739ce3866237f
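For orientation, the core of such a script boils down to three az calls. This is only a sketch with placeholder names, sizes, and region, not the contents of the gist, and it checks for the CLI so it is safe to paste anywhere:

```shell
RG=myResourceGroup        # placeholder resource group name
ACR=myContainerRegistry   # placeholder ACR name (must be globally unique)
AKS=myAksCluster          # placeholder cluster name
if command -v az >/dev/null 2>&1; then
    # Resource group to hold everything
    az group create --name "$RG" --location eastus
    # Container registry for the images we'll push later
    az acr create --resource-group "$RG" --name "$ACR" --sku Basic
    # The cluster itself; this is the slow part
    az aks create --resource-group "$RG" --name "$AKS" \
        --node-count 3 --generate-ssh-keys
else
    echo "Azure CLI not installed; commands shown for illustration"
fi
```

The gist also wires the cluster up to the registry; use it rather than this sketch for real runs.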

Building the MVC Site

While the cluster is being created you can create the ASP.NET MVC site. On your computer, create a new folder. The name of the folder will be the name of the project, so choose wisely. In my example I have the simple name of proj1. Amazing name, I know.
So now I have my folder "D:/code/proj1". Click once in the address bar of the Explorer window; it should highlight the whole path. Type cmd and press Enter, and you should see a Command Prompt window located at "D:/code/proj1".
Now for some .NET Core. Type in the following command. It will create an ASP.NET MVC website using a generic template.

dotnet new mvc

After the site is created you'll see several files. In the Controllers folder, find the file HomeController.cs. Edit that file and modify the About method:

public IActionResult About()
{
    ViewData["Message"] = "My About page. " + Environment.MachineName + ": " + Environment.OSVersion + ": " + DateTime.UtcNow.ToString();
    return View();
}

This shows the machine name, OS version, and the current date and time in UTC.
The point is that the machine name is the name of the pod the site is running on. The OS version will prove that it's running on Linux, though it will show "Unix" with some version numbers. The date and time shows the page is running live.

Now create a new file named "Dockerfile" (no extension) with the following contents:

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY . .
RUN dotnet restore proj1.csproj
RUN dotnet build proj1.csproj -c Release -o /app

FROM build AS publish
RUN dotnet publish proj1.csproj -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "proj1.dll"]

Place this file in your proj1 folder. It needs to be there for the next command to work.


Docker Image

Now we’ll use Docker to build an image, tag to a different version, then push to our new Azure Container Registry.

docker build . -t myproj:v1

docker tag myproj:v1 {ACR Name}.azurecr.io/myproj:v1

docker push {ACR Name}.azurecr.io/myproj:v1

The first command builds the image: it pulls the base .NET Core and ASP.NET Core images, then layers on our new MVC site. The second command adds a tag to the image; there is still only one image, it now just has two tags. I do it this way so there is one tag for local use and one specific to the ACR we're pushing to. Be sure to replace {ACR Name} with the actual ACR name you used in the script that created the cluster. The third command pushes the image to the ACR specified in the tag.

If the push fails with "Authorization Required", use the following command to log in. Be sure to use the name of the ACR you're targeting, without the curly braces.

az acr login --name {ACR Name}


Deploying to Cluster

By now the image should be successfully pushed to the Azure Container Registry. With your text editor, create two new files with the following contents.

Save the first file as "myproj-service.yml":

apiVersion: v1
kind: Service
metadata:
  name: my-project-service
spec:
  selector:
    app: my-project-server
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 80

And the second one as "myproj-deployment.yml":

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-project-deployment
spec:
  replicas: 3
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: my-project-server
    spec:
      containers:
        - name: my-project
          image: {ACR Name}.azurecr.io/myproj:v1
          ports:
            - containerPort: 80


Using the same terminal we used to create the cluster, we're going to send these two files to the cluster to create a Deployment of 3 Pods fronted by a Service. First, you may need to navigate to the folder where you created the files. In my example I created them in the same location as my MVC site, "D:/code/proj1". To navigate to that location from WSL:

cd /mnt/d/code/proj1

Now execute this command to list the files and verify the two files we’re going to send are indeed in the folder.

ls -al

Now to create the Service. Why the Service first? In our case it will obtain a public IP address, and that takes a few minutes, so we should get it started now.

kubectl create -f myproj-service.yml

With the Service creation on the way we’ll now create the Deployment and the Pods.

kubectl create -f myproj-deployment.yml

A Deployment specifies information about the Pods to be created. Behind the scenes it creates a Replica Set. In our example we have it set to 3 replicas. The Scheduler works to maintain that number of active Pods.
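You can see this hierarchy for yourself once the Deployment is in. A sketch; it needs kubectl and a live cluster, so it checks for the CLI first:

```shell
if command -v kubectl >/dev/null 2>&1; then
    # One call shows the whole chain: Deployment -> Replica Set -> 3 Pods
    kubectl get deployments,rs,pods || true
else
    echo "kubectl not installed; command shown for illustration"
fi
```

The Replica Set's name is derived from the Deployment's, and the Pod names are derived from the Replica Set's, which makes the ownership chain easy to spot.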

To see the status of our Service execute:

kubectl get service -o wide

Look for your Service in the list and its associated External IP address. It may still show "Pending", in which case just wait a minute and try again.

Once you have an External IP address, copy it and put it in a browser's address bar. But note the port number: in this example it's set to 8080, so be sure to specify that in the browser as well.

Now your new site should appear! Click the About link at the top. In the text that appears you should see the machine name, OS version, and date & time. The machine name is the name of the Pod that served the request. The OS version will say "Unix" plus some version numbers.



You've created a new Kubernetes cluster on Azure, and it is now hosting a new ASP.NET MVC website. There's SO much more to AKS. For a list of links I found useful, see my other post at http://seanwhitesell.com/2018/06/23/resources-for-getting-started-with-azure-kubernetes-service-with-net-core-prometheus-and-grafana

Jul 23

Latest Kubectl – Older Cluster

I'm currently working with Azure Container Service (ACS) until Azure Kubernetes Service (AKS) is available in my production data center. Why? Because if I use an Azure service in one data center but have data in another, I have to pay data egress charges; any time data leaves a data center, you pay for it. So for now I have my ACS v1.7.7 setup.

I just configured another laptop to connect to the cluster. I installed the Azure CLI and then the kubectl CLI. After making sure things were authenticated to the cluster, I tried a simple command:

kubectl get pods


to which I received this error message:

No resources found. Error from server (NotAcceptable): unknown (get pods)


I ran the command on my other working system and things were fine; the cluster responded with the list of pods I expected to see. So, what's the problem?

That's when I remembered that Kubernetes 1.11 had just gone public. The kubectl CLI I had just installed was 1.11, and apparently it has issues with a 1.7.7 cluster, which, by the way, is the latest version you can have in ACS!
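A quick way to spot this kind of skew before it bites is to compare client and server versions. A sketch, guarded so it is safe to run even without kubectl installed:

```shell
if command -v kubectl >/dev/null 2>&1; then
    # Shows Client Version vs Server Version; more than one minor
    # version apart is asking for trouble.
    kubectl version --short || true
else
    echo "kubectl not installed; command shown for illustration"
fi
```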

*sigh* Ok, so thankfully I was able to downgrade the Kubectl CLI to a previous version.

sudo apt-get remove kubectl



sudo apt-get update -q && \
sudo apt-get install -qy kubectl=1.10.0-00


I then re-authenticated to the cluster.

az acs kubernetes get-credentials --resource-group {resource group name here} --name {name of azure container service here} --ssh-key-file {path to key file here}



kubectl get pods


returned the list of pods expected.

Jun 23

Resources for “Getting Started with Azure Kubernetes Service with .NET Core, Prometheus, and Grafana”


This is the best post I have found for getting started with containers and running them on Azure Container Services:
Run .NET Core 2 Docker images in Kubernetes using Azure Container Service and Azure Container Registry | Pascal Naber

When you have Kubernetes (K8s) up and running you’ll want to view the Kubernetes Dashboard. I have seen many tutorials on how to get it started and the link they mention never worked for me. So, I simply do

kubectl proxy

It starts a proxy connection between the cluster and your localhost.

The link to view the dashboard is http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/#!/overview?namespace=default.

But notice the version is V1. There will be a time when that link will need to be updated for a newer version.

Script to create a new resource group, Azure Kubernetes Service, and Azure Container Registry https://gist.github.com/seanw122/e7b43b543f2a44be767739ce3866237f


I have way too many book recommendations, so I'll show one for now:
Pro C# 7


Play with Docker Classroom
eBook – Docker in Action 2nd Ed.
A Developer’s Guide To Docker – Docker Swarm | Okta Developer
Dockerize a .NET Core application | Docker Documentation
50+ Useful Docker Tools | Caylent
Interactive Browser Based Labs, Courses & Playgrounds | Katacoda
Running Docker containers on Bash on Windows – Jayway
Introduction to containers and Docker | Microsoft Docs


eBook – Kubernetes in Action
kubernetes/autoscaler: Autoscaling components for Kubernetes
brendandburns/k8s-playbooks: Some ansible playbooks for managing my k8s cluster(s)
Web UI (Dashboard) | Kubernetes
Watching auto-recovery
Azure: “Kubernetes the Easy Way” Managed Kubernetes on Azure AKS | E101 – YouTube
Kubernetes Co Founder Brendan Burns Orchestration is Becoming a Commodity – YouTube
Scaling Docker Containers using Kubernetes and Azure Container Service – Ben Hall – YouTube
Workloads – Kubernetes Dashboard
Ben Hall’s Blog – I don’t know darling, I’m doing my work
az acs kubernetes | Microsoft Docs
Introducing Play with Kubernetes – Docker Blog
Kubernetes Security: from Image Hygiene to Network Policies // Speaker Deck
Overview – Kubernetes Dashboard

Azure Container Services

Azure Region Availability
Azure Container Service – How to change your public key | Azure Container Service | Channel 9
Building Microservices with AKS and VSTS – Part 1 – Azure Development Community
Building Microservices with AKS and VSTS – Part 2 – Azure Development Community
Building Microservices with AKS and VSTS – Part 3 – Azure Development Community
A Closer Look at Microsoft Azure’s Managed Kubernetes Service – The New Stack
Introducing AKS (managed Kubernetes) and Azure Container Registry improvements | Blog | Microsoft Azure
SSH into Azure Container Service (AKS) cluster nodes | Microsoft Docs
Frequently asked questions for Azure Container Service | Microsoft Docs
Your very own private Docker registry for Kubernetes cluster on Azure (ACR)
Service principal for Azure Kubernetes cluster | Microsoft Docs
SSH keys on Windows for Kubernetes with Azure Container Service (ACS) | Pascal Naber
Setting up a Kubernetes cluster with Azure Container Service: Terraform, Azure Resource Manager, CLI
Manage Azure Kubernetes cluster with web UI | Microsoft Docs
Manage Azure Container Services cluster with web UI | Microsoft Docs


How to Setup Prometheus Monitoring On Kubernetes Cluster [Tutorial]
A monitoring solution for Docker hosts, containers and containerized services


Monitor Azure services and applications using Grafana | Microsoft Docs

Jun 23

Dockerizing an Existing App

I have been playing with Docker for a bit now and have always started toy apps with Docker enabled. I decided to Dockerize an existing app. Ok, so: right-click on the project, select Add, then Docker Support. Great, there's my additional Docker Compose project in the solution. I then start the debugger with the Docker environment as primary. It fails to build. That's strange. So I do a Clean and Rebuild. Same issue. docker-compose is complaining about an npm package that is not even in my project but in another app altogether! Sheesh!

When you Dockerize your app, select the project and then Show All Files. What I found is that the Docker Compose files are placed in the parent folder of my application. The build then sees and attempts to use ALL projects in the sub-folders from that point.

Simple solution: copy the solution file, the Docker Compose files, and my application folder to a new parent folder. Now it only contains my application(s) and Docker Compose.

Aug 05

Speaker Confession

@geekygirlsarah helped start a trend called #speakerconfessions. I just submitted mine.

In 2016 at Tulsa TechFest I gave two talks back to back right after lunch. I was congested at lunch, so I took an antihistamine and drank a bottle of water. During my talks I always drink water, so during my first talk I drank another bottle. Then, in between sessions, I went to the bathroom. Well, 20 minutes into my second talk I had to leave to pee again!

Aug 05

KCDC 2017 – Intro to OAuth2

On Aug 5th I had the pleasure of giving my Intro to OAuth 2 talk at KCDC 2017, {speaker}. As promised, here is the related information. The talk was a general overview of what OAuth is, plus an example of an "Authorization Flow". In upcoming posts I'll go over OAuth in a bit more detail.

The talk was from the perspective of the Application and of the User. This post does not go over how to be an OAuth provider.

To get started creating your own application with Google:
1. Create the application
2. Google OAuth Details
3. Google Scopes

Beginning Tutorial

Recommended Blogs:
Aaron Parecki
Digital Ocean

Recommended Books:
OAuth 2 in Action
Mastering OAuth 2

My slide deck:
OAuth 2 Slide Deck

Code samples ASAP

Feb 27

Do it Right Now or Do it Right, Now?

"Do you solemnly swear to tell the truth, the whole truth, and nothing but the truth, so help you God?" Those can be scary words. When you hear them, you're likely on the witness stand. The real problem is when it is YOU being sued. Incompetence on your part has led to a bug. A bug that in turn lost millions of dollars for your employer or a client. Or, far worse, that bug led to the death of an occupant of a vehicle. "Do it right now" versus "do it right, now" have vastly different implications. As professionals it is our duty to apply quality to our craftsmanship so the highest quality output is obtained. But to do so comes at a cost: it takes a lot of extra time, effort, and forethought. Generally deadlines are set for us, and we feel that line cannot move.

Ok, the client set the deadline. It is a nearly impossible deadline. You and your team work furiously day and night to reach that deadline. In the end there are of course bugs. Processes and/or requirements have changed during the development cycle. Not enough checks and balances were done along the way. And finally a sub-par solution is presented to the client. Who’s responsible for the defects? Who’s responsible for the money and/or people hurt over time? Why was the client not told what they require could not be obtained by that deadline?

"Your honor, I was merely obeying orders." Following orders does not always save your tail. Ultimately we programmers, we professionals, we so-called experts have the right to say NO. Yes, the client dictated the deadline. But what if you lost that client and the project went to someone else? Then it is not you on the witness stand trying to defend your actions by blaming the orders given. Sometimes the best outcome for us is to lose a client or a job where the best quality was not possible to apply.

I believe the day will come when programming professionals are required to have some form of certification, much like nurses or physicians. It will include a state-level board exam. It will cost thousands. And it will cause the price of development to skyrocket. That is not a favorable outcome.

It is impossible for us programmers to know everything. But, that is not a requirement. We are, however, required to do our best. Sometimes telling our managers and/or clients NO is the best answer. Sometimes a little loss helps us keep our head up high in our efforts to strive for quality in ourselves and in our craftsmanship.

So, are you going to do it right now OR are you going to do it “right”, now?

Nov 30

Willingness to be Wrong

In user group meetings I sometimes ask questions to which I already know the answer. I do this because I know there are a few attendees who are either intimidated by crowds or have a peer or coworker in the audience and don't want to appear stupid. I don't mind asking those questions on their behalf. However, there are also times when I'm wrong in my understanding or assumptions. These days I don't mind being wrong for the sake of learning. It's also helpful that when I'm wrong, others learn at the same time.

The willingness to be wrong allows us to be humble and thus open to learn. If I continue to fight for my (wrong) assumptions then I’m not learning. I’m only making it harder on myself and anyone willing to help teach me. “Is this the hill you want to die on?” That’s a great question I need to be reminded of often. We grow more and faster when we put aside our misunderstandings, humble ourselves, and listen.

There’s a phrase in martial arts called “Empty the Cup”. In order for us to learn we first must set aside what we know (or think we know) to allow new knowledge to come in. It stops us from making snap judgments and jumping ahead. I know for myself it’s easier to teach a student willing to listen and simply do what is asked of them. I teach programming and also self-defense. I see the different types of students. Regardless of age it’s easy to identify the students who want to learn versus those who are required to attend. I’ve seen young students in self-defense more eager to learn new things. I’ve also seen adult students who couldn’t empty the cup and learn new programming lessons. Age does not mean you are wiser. It means you have had more opportunities to grow.

Though I’m a teacher I’m also a student. I too must be reminded to empty the cup, breathe, and learn. But it starts with a willingness to be wrong. Don’t allow yourself to believe being wrong is a bad thing. Learning from your mistakes and those of others is a great gift.


