Draw.IO is a great alternative to Microsoft Visio. It lets me quickly draw flowcharts and diagrams. I have been using it for a long time, and the best part is that Draw.IO is open source on GitHub, with both a web and an offline (desktop) version. I personally use the desktop version, but what if I wanted to host it in my own organization, or work with a web instance instead of the desktop version?
The catch with the web version is that part of it is written in Java. Fortunately, Draw.IO has an image hosted on Docker Hub from where I can pull the image and get it working in no time. Instead of running the image on my local machine, I will try Web Apps for Containers in Azure, which you may also know as App Service in Azure.
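If you first want to try the container locally, it is nearly a one-liner. A quick sketch, assuming the Docker Hub image is jgraph/drawio and that the container listens on port 8080:

$ docker pull jgraph/drawio
$ docker run -d --name drawio -p 8080:8080 jgraph/drawio

The app should then be reachable at http://localhost:8080.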
To get started with Web Apps for Containers in Azure, create a new resource in the Azure portal and then search for Web App for Containers.
Give your web app a name and fill in the other information. For the OS, I have selected Linux. The next step is to select the official draw.io image from Docker Hub.
You can also pick the exact image you want to host by providing a tag name; by default, the latest tag is used even if you don't provide one. Check the different tags available for draw.io and select the one you want to use. After selecting the right image and tag, click the Apply button to select the image for the container, then click the Create button to create the web app with this container.
Note that there is no Standard pricing tier available when you select Web App for Containers; the containers run in the Premium tier as a PaaS offering.
After the web app has been created successfully, navigate to the Container settings section under Settings. If you are quick, you will be able to see the logs in real time and watch the image being pulled from Docker Hub and started. Below are screenshots of the logs generated by my web app.
You can clearly see in the logs how the docker pull command was initiated and how the command to start the container was executed. You can now navigate to the URL of your web app to see the application in action.
When you visit the web app for the first time, it might take a few extra seconds to load completely. This is a one-time delay; after that, the app opens up almost instantaneously.
Recently I moved from Windows to Ubuntu on one of my laptops, and since then I had been looking for a way to create VHDs or some sort of image files that I could use as a container. I found a working solution scattered across various places, so here I am putting it all together in one blog post.
First we need to create an image file (.img) using the dd command. The dd command is used to copy and convert files; the reason it is not called cc is that that name was already taken by the C compiler. Here goes the command.
$ dd if=/dev/zero of=cdata.img bs=1G count=5
Note: The command's execution time depends on the size of the image you have opted for.
The if parameter stands for input file and refers to the input file. Likewise, of stands for output file, and this will create the .img file with the name you specify. The third parameter, bs, stands for block size and, together with count, lets you set the size of the image file. The G in the above command stands for gigabytes (GB); similarly, K stands for kilobytes (KB), T for terabytes (TB), P for petabytes (PB), and so on. Here I am creating five blocks of 1 GB each for the image file named cdata.img.
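The total size is simply bs multiplied by count, so the same 5 GB image could just as well be created with a different split, for example:

$ dd if=/dev/zero of=cdata.img bs=512M count=10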
This command will create the .img file, but you cannot mount it just yet. First you have to format the image so that it has a file system, which you can do with the mkfs command.
$ mkfs -t ext3 -F cdata.img
I am creating an ext3 filesystem on my image file; -F forces mkfs to operate on a regular file instead of a block device. Here is the output of the command.
Now we can mount the image file using the mount command:
$ sudo mount -o loop cdata.img ~/Desktop/HDD/
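A quick way to confirm the mount from the terminal:

$ df -h ~/Desktop/HDD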
Once the above command succeeds, you can see the image mounted on your desktop. You can unmount the image by clicking the eject or unmount button as shown in the screenshot below, or you can execute the umount command. Unmounting the image using the command line:
$ sudo umount ~/Desktop/HDD
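If you want the image mounted automatically at every boot, one option is an /etc/fstab entry. A sketch, assuming the paths used above (fstab cannot expand ~, so spell out the home directory; the user name here is a placeholder):

/home/youruser/cdata.img  /home/youruser/Desktop/HDD  ext3  loop  0  0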
I have started working on a project that combines a number of intelligent APIs and machine learning pieces. One of the things I have to accomplish is extracting the text from images uploaded to storage. For this part of the project, I planned to use the Microsoft Cognitive Services Computer Vision API. Here is the relevant extract from my architecture diagram.
Let’s get started by provisioning a new Azure Function.
I named my Function App quoteextractor and selected the Consumption Plan as the hosting plan instead of the App Service Plan. If you choose the Consumption Plan, you are billed only for what you use, that is, whenever your function executes. On the other hand, if you choose an App Service Plan, you are billed monthly based on the plan you choose, even if your function executes only a few times a month or not at all. I selected West US as the Location because my Cognitive Services subscription has its API endpoint in West US. I also created a new Storage Account. Click the Create button to create the Function App.
After the Function App is created successfully, create a new function in the Function App quoteextractor. Click Functions on the left-hand side and then click New Function in the right-side window to create a new function, as shown in the screenshot below.
The idea is to trigger the function whenever a new image is added or uploaded to the blob storage. To filter down the list of templates, select C# as the Language and then select BlobTrigger - C# from the template list. I have changed the name of the function to QuoteExtractor and changed the Path parameter of the Azure Blob Storage trigger to quoteimages. quoteimages is the name of the container the function binds itself to; whenever a new image or item is added to that container, the function gets triggered.
Now set the Storage account connection, which is basically a connection string for your storage account. To create a new connection, click the new link and select the storage account you want your function to be associated with. The storage account I am selecting is the same one that got created at the time of creating the Function App. Once you select the storage account, you will see the connection key in the Storage account connection dropdown. If you want to view the full connection string, click the show value link.
Click the Create button to create the function. You will now see your function name under the Functions section. Expand your function and you will see three other segments named Integrate, Manage, and Monitor.
Click on Integrate and, under Triggers, update the Blob parameter name from myBlob to quoteImage or whatever name you feel like having. This name is important, as it is the same parameter I will be using in my function. Click Save to save the settings.
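For reference, these trigger settings ultimately land in the function's function.json file. A minimal sketch of what the binding might look like for this setup (the connection setting name below is hypothetical; use whatever the portal generated for you):

{
  "bindings": [
    {
      "name": "quoteImage",
      "type": "blobTrigger",
      "direction": "in",
      "path": "quoteimages/{name}",
      "connection": "quoteextractor_STORAGE"
    }
  ],
  "disabled": false
}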
The storage account that was created at the time of creating the Function App still does not have a container. Add a container with the name used in the Path of the Azure Blob Storage trigger, which is quoteimages.
Make sure you set the Public access level to Blob. Click OK to create the container in your storage account.
With this step, you are done with all the configuration required for the Azure Function to work properly. Now let's get the Computer Vision API subscription. Go to https://azure.microsoft.com/en-in/try/cognitive-services/ and, under the Vision section, click Get API Key next to Computer Vision API. Agree to the terms and conditions and continue. Log in with any of your preferred accounts to get the subscription ready. Once everything goes well, you will see your subscription key and endpoint details.
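Before wiring the key into the function, it can be handy to sanity-check it directly against the OCR endpoint. A sketch using curl (substitute your own key and a publicly accessible image URL; the region in the endpoint has to match your subscription):

$ curl -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/ocr?language=unk&detectOrientation=true" \
    -H "Ocp-Apim-Subscription-Key: <your-key>" \
    -H "Content-Type: application/json" \
    -d '{"url":"<image-url>"}'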
The response I get from the API is in JSON format. I can either parse the JSON in the function itself or save it directly in CosmosDB or some other persistent storage. I am going to put the entire response into CosmosDB in raw form. This step is optional, as the idea is to see how easy it is to use the Cognitive Services Computer Vision API with Azure Functions. If you skip this step, you will have to tweak the code a bit so that you can see the response in the log window of your function.
Provision a new CosmosDB and add a collection to it. By default there is a database named ToDoList with a collection called Items that you can create right after provisioning CosmosDB from the getting-started page. If you want, you can also create a new database and add a new collection with names that make more sense.
To create a new database and collection, go to Data Explorer and click New Collection. Enter the details and click OK to create the database and collection.
Now we have everything ready. Let's get started with the code!
Let's start by creating a new function called SaveResponse that takes the API response as a string parameter. In this method I am going to use the Azure DocumentDB library to communicate with CosmosDB, which means I have to add the library as a reference to my function. To do that, navigate to the Kudu portal by adding scm to the URL of your function app, like so: https://quoteextractor.scm.azurewebsites.net/. This opens up the Kudu portal.
Go to Debug console and click CMD. Navigate to site/wwwroot/<Function Name>. In the command window, use the mkdir command to create a folder named bin.
mkdir bin
You can now see the bin folder in the function app directory. Click the bin folder and upload the lib(s).
In the very first line of the function, use the #r directive to add a reference to an external library or assembly. You can then use the library in the function with the help of a using directive.
#r "D:\home\site\wwwroot\QuoteExtractor\bin\Microsoft.Azure.Documents.Client.dll"
Right after adding the reference to the DocumentDB assembly, add the namespaces with using directives. Later in the function you will also need to make HTTP POST calls to the API endpoint, so bring in the System.Net.Http namespace as well.
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using System.Net.Http;
using System.Net.Http.Headers;
The code for the SaveResponse function is very simple: it uses the DocumentClient class to create a new document from the response we receive from the Vision API. Here is the complete code.
private static async Task<bool> SaveResponse(string APIResponse)
{
    bool isSaved = false;
    const string EndpointUrl = "https://imgdocdb.documents.azure.com:443/";
    const string PrimaryKey = "QCIHMgrcoGIuW1w3zqoBrs2C9EIhRnxQCrYhZOVNQweGi5CEn94sIQJOHK3NleFYDoFclB7DwhYATRJwEiUPag==";
    try
    {
        DocumentClient client = new DocumentClient(new Uri(EndpointUrl), PrimaryKey);
        await client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri("VisionAPIDB", "ImgRespCollcetion"), new { APIResponse });
        isSaved = true;
    }
    catch (Exception)
    {
        //Do something useful here.
    }
    return isSaved;
}
Notice the database and collection names passed to CreateDocumentCollectionUri inside the CreateDocumentAsync call; CreateDocumentCollectionUri returns the collection URI based on the database and collection names. The last parameter, new { APIResponse }, is the response received from the Vision API.
Create a new function and call it ExtractText. This function takes the Stream object of the image, which we can easily get from the Run function. The method converts the stream into a byte array with the help of another method, ConvertStreamToByteArray, and then sends the byte array to the Vision API endpoint to get the response. Once the API responds successfully, I save the response in CosmosDB. Here is the complete code for ExtractText.
private static async Task<string> ExtractText(Stream quoteImage, TraceWriter log)
{
    string APIResponse = string.Empty;
    string APIKEY = "b33f562505bd7cc4b37b5e44cb2d2a2b";
    string Endpoint = "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/ocr";
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", APIKEY);
    Endpoint = Endpoint + "?language=unk&detectOrientation=true";
    byte[] imgArray = ConvertStreamToByteArray(quoteImage);
    HttpResponseMessage response;
    try
    {
        using (ByteArrayContent content = new ByteArrayContent(imgArray))
        {
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
            response = await client.PostAsync(Endpoint, content);
            APIResponse = await response.Content.ReadAsStringAsync();
            log.Info(APIResponse);
            //TODO: Perform check on the response and act accordingly.
        }
    }
    catch (Exception)
    {
        log.Error("Error occurred");
    }
    return APIResponse;
}
There are a few things in the above function that need our attention. You get the subscription key and the endpoint when you register for the Cognitive Services Computer Vision API. Notice that the endpoint I am using ends with ocr, which matters because I want to read the text from the images being uploaded to the storage. The other parameters I am passing are language and detectOrientation. language has the value unk, which stands for unknown and tells the API to auto-detect the language, while detectOrientation makes the API check the text orientation in the image. The API responds in JSON format, which this method returns to the Run function, and that output becomes the input for the SaveResponse method. Here is the code for the ConvertStreamToByteArray method.
private static byte[] ConvertStreamToByteArray(Stream input)
{
    byte[] buffer = new byte[16 * 1024];
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray();
    }
}

Now comes the method where everything ties together: the Run method. Here is how the Run method looks.
public static async Task<bool> Run(Stream quoteImage, string name, TraceWriter log)
{
    string response = await ExtractText(quoteImage, log);
    bool isSaved = await SaveResponse(response);
    return isSaved;
}
I don't think this method needs any explanation. Let's execute the function by adding a new image to the storage and see whether we get output in the Logs window and a new document in CosmosDB. Here is a screenshot of the Logs window after the function was triggered by uploading a new image to the storage.
In CosmosDB, I can see a new document added to the collection. To view the complete document, navigate to Document Explorer and check the results. This is how my view looks:
To save you all a little time, here is the complete code.
#r "D:\home\site\wwwroot\QuoteExtractor\bin\Microsoft.Azure.Documents.Client.dll" using System.Net.Http.Headers; using System.Net.Http; using Microsoft.Azure.Documents; using Microsoft.Azure.Documents.Client; public static async Task<bool> Run(Stream quoteImage, string name, TraceWriter log) { string response = await ExtractText(quoteImage, log); bool isSaved = await SaveResponse(response); return isSaved; } private static async Task<string> ExtractText(Stream quoteImage, TraceWriter log) { string APIResponse = string.Empty; string APIKEY = "b33f562505bd7cc4b37b5e44cb2d2a2b"; string Endpoint = "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/ocr"; HttpClient client = new HttpClient(); client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", APIKEY); Endpoint = Endpoint + "?language=unk&detectOrientation=true"; byte[] imgArray = ConvertStreamToByteArray(quoteImage); HttpResponseMessage response; try { using(ByteArrayContent content = new ByteArrayContent(imgArray)) { content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream"); response = await client.PostAsync(Endpoint, content); APIResponse = await response.Content.ReadAsStringAsync(); //TODO: Perform check on the response and act accordingly. } } catch(Exception) { log.Error("Error occured"); } return APIResponse; } private static async Task<bool> SaveResponse(string APIResponse) { bool isSaved = false; const string EndpointUrl = "https://imgdocdb.documents.azure.com:443/"; const string PrimaryKey = "QCIHMgrcoGIuW1w3zqoBrs2C9EIhRnzZCrYhZOVNQweIi5CEn94sIQJOHK2NkeFYDoFcpB7DwhYATRJwEiUPbg=="; try { DocumentClient client = new DocumentClient(new Uri(EndpointUrl), PrimaryKey); await client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri("VisionAPIDB", "ImgRespCollcetion"), new {APIResponse}); isSaved = true; } catch(Exception) { //Do something useful here. } return isSaved; } private static byte[] ConvertStreamToByteArray(Stream input) { byte[] buffer = new byte[16*1024]; using (MemoryStream ms = new MemoryStream()) { int read; while ((read = input.Read(buffer, 0, buffer.Length)) > 0) { ms.Write(buffer, 0, read); } return ms.ToArray(); } }
A video tutorial on how you can enable Multi-Factor Authentication (MFA) for users in Azure Active Directory.
I run this blog on Azure as a Web App. I was planning to host it on a VM just for fun, but I dropped the idea and chose Web Apps for my blog as a simple medium with less deployment effort and no server management at all. This blog post is mostly me documenting the steps to deploy or host an application on an Azure Linux virtual machine.
As this is just a demo, I will provision a small VM. You can use the Azure CLI to do that, or you can do it from the Azure Portal; I am using the Azure Portal. My choice is an Ubuntu 16.04 LTS machine, where LTS stands for Long Term Support, meaning this version will be supported for a long time. To create a new VM in Azure, click the Virtual machines icon in the left-hand navigation pane and then click the Create Virtual machines link in the center of the page. This opens another pane where you can select the type of OS or pick a VM from predefined templates. In my case, I am selecting Ubuntu Server and then the version 16.04 LTS. To create the VM, click the Create button as shown in the screenshot below.
In the next pane, provide some basic details to create the VM. You can choose whatever suits your business needs. This is what my basic details look like.
Please note that by default the super user (su) is disabled on an Azure Linux VM. If I want to execute any command as root, all I have to do is prefix it with sudo and use the same password as my user account on that VM.
Click OK and proceed to select the VM size. By default, Azure presents you with a few recommendations. Click View all to see all available VM size offerings. I am selecting A1 for this example; in production, you should choose wisely based on your requirements.
I will leave the Settings as they are.
Hit OK to run the final validation. In the final pane, click OK again to submit the deployment request to Azure. It will take a while to provision the VM; once done, the VM home page opens, where you can see stats and carry out other administrative tasks on your newly created VM. This is a server distribution of Ubuntu, so there is no GUI to work with; SSH is our only way into the VM. On the VM home page, click the Connect button and it will show you the SSH command to connect to this VM. Here is what I get after executing the SSH command for the first time.
And that is it: we have an Ubuntu Server 16.04 LTS running in the cloud. The next step is to install .NET Core on this VM.
Installing .NET Core on Ubuntu or any other Linux distribution is dead easy. The documentation itself is enough to get you started. Head over to this link, where you can find the latest release of .NET Core for all Linux distributions, and select the right version of Ubuntu before running the commands. I have installed Ubuntu Server 16.04, so the commands I will be executing are:
$ sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet-release/ xenial main" > /etc/apt/sources.list.d/dotnetdev.list'
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 417A0893
$ sudo apt-get update
After the update is complete, you are ready to install the latest version of .NET Core with this simple command.
$ sudo apt-get install dotnet-dev-1.0.1
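You can quickly verify the installation by checking the CLI version:

$ dotnet --version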
Once the installation is complete, create a new directory with any name of your choice. I named it testApp and cd into the new directory like so.
$ mkdir testApp && cd testApp
Because I am executing the dotnet command for the first time, the dotnet CLI will populate the local package cache to improve package restore speed and enable offline access. Inside the testApp directory, create a new web app by executing the command below.
$ dotnet new mvc --auth None --framework netcoreapp1.1
Here I am telling the dotnet CLI to create a new application using the mvc template, with no authentication of any kind, targeting framework version netcoreapp1.1. Now if I run ls in the directory, I will see the following contents.
The next thing to do is restore the packages with the restore command.
$ dotnet restore
After the restore is complete, let's run the application and check whether it runs properly. To check this, run the command
$ dotnet run
Our web application is now running on the VM. Now let's try to access it from a local browser with a web address like this.
http://<ip-address-of-the-vm>:5000
You will notice that you cannot reach the site running on the VM. This is for several reasons. First, port 5000 is not accessible from outside; I would have to create a firewall exception to make it reachable. Even if I did that, it is still not a good idea to make your site visitors use port 5000 along with the domain name. To overcome this, and to have more control over the web server, I need to install a web server that acts as a reverse proxy for the .NET Core application running on the VM. One more thing worth highlighting: when I execute the above command, the terminal window is tied up by the running dotnet server, and if I terminate the command, it also terminates my application. I will show you how to overcome this issue after configuring the Nginx server as a reverse proxy for the application.
Internally, .NET Core uses the Kestrel server, which is installed by default with the project. I cannot use it on its own as a production-ready server to host my application, so I am going to install the Nginx web server and configure it as a reverse proxy. To install Nginx, I just use a simple apt-get command and then start the service.
$ sudo apt-get install nginx
$ sudo service nginx start
After the installation is done, I should be able to access the default web page served by Nginx by typing the IP address of the VM into a browser. If you try this now, it will not work, because the VM's Network Security Group in Azure will not let you through. By default, only SSH connections to the VM are allowed, and web applications work over HTTP, so I need to allow HTTP as well. Under the VM's Network interfaces, click Network security group. Here you can see the Inbound and Outbound security rules. Under Inbound security rules, you can see there is only one rule, which allows SSH connections to the VM.
Under Settings, click Inbound security rules and then click Add to add a new inbound rule.
I selected HTTP from the Service dropdown. This automatically sets the Protocol, Port range, and Action. Click OK to set the rule.
Let's try accessing the web page again; this time I am able to see the default Nginx web page.
Time to configure the web server as a reverse proxy for my web application. Open the config file from the location below.
$ sudo nano /etc/nginx/sites-available/default
Add the lines below to the config file. The configuration is pretty self-explanatory; I recommend reading the complete Nginx configuration documentation here.
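A minimal reverse-proxy server block for this setup looks something like the following (a sketch, assuming the app listens on localhost:5000; adjust to your needs):

server {
    listen 80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}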
If you made the changes while the server was running, either restart the server to reload the settings or execute the command below to reload them without restarting the server.
$ sudo nginx -s reload
To publish the website, you also have to publish it with the Release configuration. With .NET Core there are two ways of doing it: the .NET Core Guide describes a framework-dependent deployment (FDD) and a self-contained deployment (SCD). As per the documentation, FDD and SCD are described as follows:
For an FDD, you deploy only your app and any third-party dependencies. You don’t have to deploy .NET Core, since your app will use the version of .NET Core that’s present on the target system. This is the default deployment model for .NET Core apps.
For a self-contained deployment, you deploy your app and any required third-party dependencies along with the version of .NET Core that you used to build the app. Creating an SCD doesn’t include the native dependencies of .NET Core on various platforms (for example, OpenSSL on macOS), so these must be present before the app runs.
Based on the documentation, and as I have already installed .NET Core on the VM, I am going with the FDD approach. As said, this is the default deployment model, so I am going to zip the publish content and FTP it to the VM.
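For reference, a framework-dependent Release publish is a single CLI command; a sketch, run from the project directory (the output folder name is just an example):

$ dotnet publish -c Release -o PublishOutput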
In my last post, I built an application that integrates Azure AD authentication into a .NET Core web application. I will use the same application, publish it with the Release configuration on my Windows machine, and zip it. Here are the contents of the release publish folder.
Now I can upload this to the VM. I already have an FTP server available, so I am not configuring one on the VM; if you need an FTP server, look no further than FileZilla Server. Execute the wget command to download the zip file.
$ wget <ftp address>/ADApp.zip
By default, the unzip command is not installed on the VM; install it using the command below.
$ sudo apt-get install unzip
Extract the zip file into a directory named ADApp by executing the command below.
$ mkdir ADApp && unzip ~/websites/ADApp.zip -d ADApp
Change the working directory and try to run the application to ensure that the publish was successful.
$ cd ADApp && dotnet AddADAuth.dll
Note that the directory structure is a bit different between this screenshot and the one below. This is because I copied all the files and folders from the PublishOutput folder into the root of the ADApp folder.
When you run the application for the first time, you will notice that it does not start right away; the .NET Core CLI first configures the environment and then executes the application.
To execute the web application, run the command below. The dll file is the one executed by the .NET Core runtime.
$ dotnet AddADAuth.dll
The application started successfully on the local server on port 5000.
But there are problems with running the application like this. One problem is that the command line is now occupied and I cannot do anything else; if I terminate the process, the web application also stops and I can no longer browse the site. The second problem is that I have configured the Nginx web server to front my web application, and the internal .NET web server is not something I can expose directly in a production environment. In short, I need the application to keep running on port 5000 without hanging up or tying down the command line. To overcome these two problems, I can either use the nohup command or something more powerful like supervisor. For now I am sticking with nohup, as this is just a sample application; when I want more control, I will switch to supervisor. Here is the nohup command in action, giving me the terminal back after I execute the dotnet command.
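A minimal sketch of that invocation, assuming the same AddADAuth.dll as above (nohup keeps the process alive after the shell exits, and the trailing & sends it to the background):

$ nohup dotnet AddADAuth.dll &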
Notice that I first execute the ps command to list the processes running in the background. Then I execute the dotnet command with nohup, and after that I run ps one more time to check the running processes. You can see the dotnet process running with process ID 65164.
Just as I checked whether I could access the default Nginx page from my browser when configuring Nginx, I now enter the IP address of my VM in the browser, and this time I can see my AD authentication application running.
The CSS is a bit off in the web application; that is because the inner URLs need to be configured with the different host name.
Now I have an ASP.NET Core web application hosted on a Linux VM in Azure, with Nginx as the web server. Currently I access the web application by the IP address of the VM, but I want to access it with a recognizable name. I have a parked domain with GoDaddy called theevilprogrammer.com, and to associate this domain with my web application I just need access to my VM and my GoDaddy account.
Go to your domain registrar, where you can access the DNS settings of your parked domain. In my case it is GoDaddy. Select your domain and click Manage DNS.
I have an A record in the DNS management of my domain. All I have to do here is change its Value to point to the IP address of the VM. Once this is done, click the Save button.
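On the Nginx side, the server block also needs to know which host name it serves. A sketch of the relevant change, building on the hypothetical reverse-proxy config shown earlier:

server {
    listen 80;
    server_name theevilprogrammer.com www.theevilprogrammer.com;

    location / {
        proxy_pass http://localhost:5000;
    }
}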
Restart the Nginx server by running the command below. This reloads the configuration and applies the changes we made. Now I can check whether the site is accessible via the domain name I configured.
$ sudo service nginx restart
Open a browser and type in the domain name, which in my case is theevilprogrammer.com.
So this is how you can run your .NET Core applications from a VM. Keep in mind that there are a lot of Nginx and Azure VM configurations you have to set before going to production. This post can get you started with your own application hosted on a Linux VM in the cloud, but there are still many small yet important things to take care of before going live.