How to build .NET Core 1.0 NuGet Packages with version suffix

The Problem

As you might have noticed, Microsoft recently released .NET Core Tools 1.0 alongside Visual Studio 2017 (see: https://blogs.msdn.microsoft.com/dotnet/2017/03/07/announcing-net-core-tools-1-0/ ). With this update they also said goodbye to the project.json project format and (re)introduced the .csproj-based format. So an upgrade is in order.

Today I upgraded a larger solution (57+ projects) from the project.json-based project format to the new (old) .csproj format. While that worked pretty well overall, I found one particularly annoying problem: packing a NuGet package for my libraries that uses a version suffix (e.g. the build number). This could look like this in your NuGet browser: MyPackage.2.1.0-4711

While that worked perfectly fine with the prior versions of the .NET Core tooling via 'dotnet pack --version-suffix <mySuffix>', the new version fails to handle this properly. With the new tooling you get one NuGet package per project, and that package will only include the DLL for this project. All references to other projects are defined in a .nuspec file within the package. So you have to put all your referenced libraries in separate NuGet packages and publish them in order for the dependent package to resolve them.

In our case we use a CI/CD pipeline based on GitLab and build our packages with a suffix containing the GitLab pipeline ID, so a package is usually named '<PackageName>.<SemVersion>-<PipelineID>'. For each new build, all internal libraries are tagged with that version suffix as well and are expected to be available to our published packages. However, that started to fail once we switched to the .NET Core 1.0 tooling and .csproj projects.

The reason is that the .nuspec file which gets incorporated into the .nupkg does not contain the version suffix when defining the dependencies required by the packaged library. So MyPackage.2.1.0-4711 references AnotherPackage.2.1.0, whereas only AnotherPackage.2.1.0-4711 is available in our private NuGet repository.

Now why is that?

Explanation

It turns out that when restoring a project via 'dotnet restore', a file called 'project.assets.json' is created in the /obj subfolder of your project directory. That file includes everything that is required for the project to be built. When running 'dotnet pack', this file is used to determine what needs to be put into the .nuspec file in the NuGet package. Since the restore only looked at the other locally referenced projects, it only added them with their plain version number (in this case "2.1.0"), because it obviously does not know about any pipeline ID which has to be added as a suffix.

Solution

To fix this you need to provide the suffix when restoring your project, so that it can be written into the project.assets.json file and be used by the 'dotnet pack' command later on. To achieve this you have to use the 'dotnet msbuild' command with a specific set of parameters. In our case we used the following command instead of a simple 'dotnet restore':

dotnet msbuild "/t:Restore" /p:VersionSuffix=$CI_PIPELINE_ID /p:Configuration=Release

Basically all dotnet commands are wrappers for calls to 'msbuild', so "/t" defines the target to run and the "/p" switches set MSBuild properties as key-value pairs.

So when you now run 'dotnet pack', the .nuspec file in your package will include all the correct references!
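For reference, here is a minimal sketch of how the full restore-and-pack sequence could look in a GitLab CI job ($CI_PIPELINE_ID is provided by GitLab; the Release configuration and the ./artifacts output directory are assumptions):

# restore with the suffix so that it ends up in project.assets.json
# (and thus in the dependency entries of the generated .nuspec files)
dotnet msbuild "/t:Restore" /p:VersionSuffix=$CI_PIPELINE_ID /p:Configuration=Release

# pack with the same suffix so the packages themselves are versioned as <SemVersion>-<PipelineID>
dotnet pack --configuration Release --version-suffix $CI_PIPELINE_ID --output ./artifacts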

 

P.S.: Logo taken from https://github.com/campusMVP/dotnetCoreLogoPack

How To Run Kubernetes locally with minikube on Windows 10

Setup Kubectl to access Cluster

  1. Download the kubectl client from here: https://github.com/eirslett/kubectl-windows/releases
  2. Put it in a folder of your choice and add that folder to your PATH (see the sketch after this list)
  3. Proceed with installing your local Kubernetes cluster in the next step. It will set up the kube config so that you can use kubectl as usual from CMD
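For example, assuming you placed kubectl.exe in C:\tools\kubectl (a hypothetical location), you could append it to your user PATH from CMD like this:

:: add the folder to the user PATH (note: setx merges and may truncate very long PATHs)
setx PATH "%PATH%;C:\tools\kubectl"
:: open a new CMD window afterwards, then verify the client is found
kubectl version --client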

Setup Kubernetes Cluster

  1. Install Minikube by downloading the minikube-installer.exe from here: https://github.com/kubernetes/minikube/releases
  2. Execute the installer
  3. Execute the update_path.bat in the installation folder under C:\Program Files (x86)\Kubernetes\Minikube
  4. Create a Hyper-V virtual switch by opening the Hyper-V Manager from the Start menu and right-clicking on your computer's node in the left pane. Then select "Manage virtual switches" and create a new virtual switch that goes by the name "minikube" and is set to "internal only".
  5. Start Minikube by pointing it towards Hyper-V and the newly created virtual switch: minikube start --vm-driver=hyperv --hyperv-virtual-switch=minikube
  6. Optionally: provide the number of CPUs and the amount of memory you would like to give to the minikube VM with the --cpus and --memory flags, e.g.: minikube start --vm-driver=hyperv --hyperv-virtual-switch=minikube --cpus=4 --memory=4096
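Once the start command returns, a quick sanity check could look like this (run from a fresh CMD window):

:: the cluster and its single node should report as running and ready
minikube status
kubectl cluster-info
kubectl get nodes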

 

Solved: Outlook 2016 crashes after Windows 10 Anniversary Update

After the recent Windows 10 Anniversary Update my Outlook started crashing upon start. So I investigated and found the .OST files to be the cause in my case. As it turns out you may delete them without problems, as long as you don't delete any .PST files. They are located in the following directory: C:\Users\<Username>\AppData\Local\Microsoft\Outlook.

Just delete the <yourEMailAddress>.ost files and try to start Outlook again. This solved the issue for me.
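If you prefer the command line, the following one-liner should do the same (a sketch; close Outlook first. It only touches the .ost files, never the .pst files):

:: %LOCALAPPDATA% resolves to C:\Users\<Username>\AppData\Local
del "%LOCALAPPDATA%\Microsoft\Outlook\*.ost"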

OpenNebula Virtual Router with open vSwitch

I just finished setting up open vSwitch as my VLAN isolation technology. So by now I have isolated networks and am able to group VMs into these networks. Of course I now want to be able to access certain services inside of those isolated networks. That’s where OpenNebula’s Virtual Router Appliance comes into play. 


For basic setup and usage the OpenNebula Documentation is quite sufficient: http://docs.opennebula.org/4.8/administration/networking/router.html


So I set up my VR (Virtual Router) with two NICs, one of which I attached to my usual public network, while I created a fresh Virtual Network for the private network. For the private network I used a completely different IP range, like 10.10.0.0/24. This network is also isolated via Open vSwitch.


The complete contextualization part of the template for the VR looks like this:

CONTEXT=[
  DHCP="NO",
  DNS="<DNSAddress1> <DNSAddress2>",
  FORWARDING="10.10.0.2:80",
  NETWORK="YES",
  NTP_SERVER="<NTPServerAddress>",
  PRIVNET="$NETWORK[TEMPLATE, NETWORK_ID=\"21\"]",
  PUBNET="$NETWORK[TEMPLATE, NETWORK_ID=\"12\"]",
  RADVD="NO",
  SEARCH="local.domain",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]


However, after setting up my router I encountered some strange behavior: I was not able to route through to a VM hosted on another physical machine.


For quite a while I thought it had to do with the VR (since isolation had worked fine before), but then I moved all VMs to the same host machine and voilà, routing worked! So I dug deeper and found out that open vSwitch produced VLAN IDs that were not valid for my Cisco switch! (Valid IDs range from 1 to 1001 in normal mode, with 1002 to 1005 being reserved, and extend from 1006 to 4094 in extended mode.) Somewhere I read that the IDs are generated from the VM ID plus some hash value. At that time I already had rather high VM IDs, so maybe together with the hash value my VLAN IDs extended beyond Cisco's valid range.


By setting my VLAN ID manually in the Virtual Network, I now have a stable, working isolated Virtual Network wired up to the world through my Virtual Router.
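In the Virtual Network template this boils down to something like the following (a sketch; the bridge name and VLAN ID are assumptions, pick an ID within your switch's valid range):

BRIDGE="br0"
VLAN="YES"
VLAN_ID="100"

As far as I can tell, the ovswitch driver only derives a VLAN ID automatically when VLAN_ID is not set explicitly.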


However, I will need to investigate this VLAN ID generation problem further, since automatic assignment is a very useful feature.




OpenNebula and OpenVSwitch on Ubuntu Server 14.04 LTS

These are the steps to set up a minimal virtual network based on Open vSwitch for use with OpenNebula:

    1. Install the required package:
      
      # apt-get install openvswitch-switch
      
    2. Create a file /etc/sudoers.d/openvswitch, so that the oneadmin user can create virtual network components using the ovs-vsctl and ovs-ofctl commands:
      
      
      %oneadmin ALL=(root) NOPASSWD: /usr/bin/ovs-vsctl
      %oneadmin ALL=(root) NOPASSWD: /usr/bin/ovs-ofctl
      
    3. Do not map a physical device to the bridge device in the standard Linux network layer! Your minimal network configuration in /etc/network/interfaces should look similar to this:
      
      auto lo eth0 br0
      
      iface lo inet loopback
      
      iface eth0 inet manual
      
      iface br0 inet dhcp
      
      This example uses a single physical interface. Extend it for use with multiple physical and/or bonded devices. But do not assign a physical device to the bridge here; that is accomplished by Open vSwitch in the next step.

 

    4. Configure the bridge using the openvswitch commands (example for the NIC eth0):
    
    
    # ovs-vsctl add-br br0
    # ovs-vsctl add-port br0 eth0
    
    
    5. Re-register your hosts with Open vSwitch as the network solution.
    6. Re-create your virtual networks with ovswitch as the type and decide via VLAN = YES/NO whether you want isolation or not (see the sketch below).
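To make step 6 concrete, a minimal Virtual Network template for an isolated network could look like this (a sketch; the name, bridge and address range are assumptions, and the AR syntax requires OpenNebula 4.8 or later):

NAME = "private"
BRIDGE = "br0"
VLAN = "YES"
AR = [ TYPE = "IP4", IP = "10.10.0.1", SIZE = "254" ]

Save it as private.net and register it with: $ onevnet create private.net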

Automatically assign an IP address to a newly created linux VM guest

So you've created your all-new VM guest, uploaded its image, attached it to a template and started your VM. OpenNebula tells you that the VM has an IP address from your virtual network, but sadly you can neither access the VM via this address, nor is the interface in your Linux guest configured with the settings from your virtual network.

The solution to this problem is quite easy: install the one-context package:

  1. Download the correct version matching your Linux guest OS’s package manager and your OpenNebula version from here: http://dev.opennebula.org/projects/opennebula/files
  2. Install the package in your guest VM
  3. Reboot your guest VM and voilà, your network is configured
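On a Debian/Ubuntu guest this could look like the following (a sketch; the exact package file name depends on your guest OS and OpenNebula version, so treat it as a placeholder):

# inside the guest VM, after downloading the matching package
dpkg -i one-context_*.deb
# resolve any missing dependencies, then reboot
apt-get -f install
reboot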

Pro-Tip: Do this before you upload your VM image to OpenNebula

Resolving network issues in OpenNebula / KVM

As we started out running the first VMs with OpenNebula / KVM on Ubuntu Server, we encountered strange network lags. These showed up as unresponsive SSH sessions or lagging RDP sessions on Windows guests.

Finally I found out how to resolve this (at least for us): the solution is to use the Red Hat virtio drivers for the network cards.

In OpenNebula, when creating a new template, enter "virtio" in the model field of your network settings.

This enables the virtio driver for the VMs instantiated from that template.
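If you define your templates as files instead of using the Sunstone UI, the equivalent NIC section could look like this (a sketch; the network name is a placeholder):

NIC = [ NETWORK = "private", MODEL = "virtio" ]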

To enable the driver in Linux / Windows guests, you need to do different things:

Linux

Easy, you're done 😉 No, really: the driver is activated automatically.

Windows

  1. Get the latest Windows virtio drivers from here: http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers
  2. Create a new Image from the .iso file located in the downloaded zip from above. Select type CD-ROM.
  3. Attach this image to your template or your VM
  4. RDP into your VM
  5. Check the Device Manager: there should be an Ethernet device without a proper driver (yellow exclamation mark attached). Select this device and choose to update the driver. When asked, select "Choose manually from computer" and browse to the appropriate folder (64-bit or 32-bit) on the CD-ROM drive in your file explorer.
  6. Install the driver
  7. (Optional) You may want to repeat these steps in case you have other devices with yellow marks on them.

Installing Ubuntu Server, KVM and OpenNebula on a Mac Pro Cylinder

Here’s the How-To for installing the full stack on a Mac Pro Cylinder (2013, latest):

Install Ubuntu on Mac Pro Cylinder

Prerequisites: a bootable USB stick or CD-ROM with Ubuntu Server 14.04 on it

  1. Boot into OS X
  2. Use the Disk Utility to decrease the size of your main partition (Here’s how: OS X Daily). We kept 40 GB for OS X, but that’s not a necessity
  3. Download rEFInd from here: http://sourceforge.net/projects/refind/files/0.8.3/refind-bin-0.8.3.zip/download
  4. Unzip the archive in OS X and run install.sh from the Terminal (see the sketch after this list)
  5. Plug in your bootable USB stick
  6. Reboot the Mac
  7. From the rEFInd menu, select to boot from your USB stick
  8. Select Install Ubuntu
  9. When asked, manually partition your HDD to feature a swap partition and a main partition.
  10. When asked which packages to install, select "OpenSSH Server" and "Virtual Machine Host"
  11. Finish installation
  12. Reboot and select to boot into Recovery mode
  13. From the root console install the fglrx graphics drivers by issuing the command "apt-get install fglrx"
  14. Reboot into regular Ubuntu. The system should now start up as expected
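For step 4, the Terminal session could look like this (a sketch, assuming the zip was downloaded to ~/Downloads):

cd ~/Downloads
unzip refind-bin-0.8.3.zip
cd refind-bin-0.8.3
sudo ./install.sh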

Install OpenNebula on Mac Pro Cylinder

Now you’ve already got KVM and libvirt installed with your Ubuntu Server distribution. The next step is to install OpenNebula and to set it up for libvirt and KVM:

Actually OpenNebula’s documentation features an excellent guide on how to set one host up to serve the OpenNebula frontend and to act as a VM host at the same time. Here it is: http://docs.opennebula.org/4.8/design_and_installation/quick_starts/qs_ubuntu_kvm.html

Hello world!

Welcome to the German version of WordPress. This is the first post. You can edit it or delete it. To avoid spam, head straight over to the plugin area and activate the appropriate plugins. So, enough rambling; now get down to blogging!
