Building Real Time Applications With Firebase And AngularJS

17. May 2014 23:25

Building real-time applications with SignalR is easy, but building real-time applications with plain JavaScript can be a problem if you are not aware of Firebase. I first heard of it when I was learning AngularJS: the official AngularJS site has an example where you can add, edit, and delete details in real time. That example uses Firebase as the storage and updates the UI in real time.

The data is stored in JSON format, and accessing it from JavaScript is very easy. The Firebase API is pretty powerful and lets you get started in minutes. To get started, you first have to create a new account with Firebase. It's a paid service, but for now you can opt for a developer account. As soon as your account is created you will be redirected to your dashboard, where you can create a new application or use the one that Firebase provides by default. Take note of the URL ending with firebaseio.com; we will need it later when implementing.

Clicking the Manage App button takes you to the application's details page, where you can view, edit, import, and export the data in JSON format, set up authentication with Facebook, Twitter, Google, etc., simulate read/write operations, and more. You can see the sample JSON data I have been testing with in the screenshot below. If you already have a JSON file, you can import it directly from the dashboard using the Import JSON feature.
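
To give an idea of the shape of the data, a Feeds section holding the Name, Url, and Description fields used in this example might look something like this (the key is a hypothetical push id; your actual data will differ):

{
  "Feeds": {
    "-feed-id-1": {
      "Name": "Midnight Programmer",
      "Url": "http://midnightprogrammer.net",
      "Description": "Development and technology blog"
    }
  }
}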

For this example I am using jQuery and AngularJS; if you wish, you can stick to jQuery alone. Create a new HTML page and add the Firebase JS references in the head section. Since I am using AngularJS, I am adding the AngularJS reference as well.

<script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.2.min.js" type="text/javascript"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.15/angular.min.js"></script>
<script src="https://cdn.firebase.com/js/client/1.0.11/firebase.js"></script>
<script src="https://cdn.firebase.com/libs/angularfire/0.7.1/angularfire.min.js"></script>

The first three references are self-explanatory; the last one is AngularFire, a binding library that makes it quick to work with Firebase from AngularJS applications. Here is the AngularJS code with the firebase dependency.

var feedApp = angular.module('feedDataApp', ['firebase']);

feedApp.controller('feedListController', function ($scope, $firebase) {
    // Reference to the Feeds section of the Firebase storage
    var fbURL = new Firebase("https://scorching-fire-0000.firebaseio.com/Feeds/");
    $scope.feedsList = $firebase(fbURL);

    $scope.save = function () {
        // $add pushes a new child entry into the Feeds section
        $scope.feedsList.$add({
            Name: $scope.feedsList.Name,
            Url: $scope.feedsList.Url,
            Description: $scope.feedsList.Description
        });
        // Clear the text boxes after saving
        $(":text").val('');
    };
});

I am not getting into the basics of AngularJS, but I will explain a few things in the above code. The first line declares a dependency named firebase; in a simple AngularJS app we could have skipped the dependencies. Then I have the controller named feedListController, which takes $scope and a new parameter, $firebase. Inside the controller, the variable fbURL holds my Firebase URL. Notice that I have appended Feeds to the URL: this is because I want my data to be saved inside the Feeds section of the Firebase storage. $scope.feedsList holds the collection of all the entries in my storage. The save function saves the details to storage: inside Firebase's $add function I pass Name, Url, and Description, but you can pass as many parameters as you want. The best thing is that you don't even have to define a storage schema. As soon as you click the save button, a new entry is created in the storage. This is what my storage looks like.

If you don't want to save under the Feeds section, just remove Feeds from the URL in the JS above. The UI for this demo is simple: it lists the saved JSON data in a readable form using AngularJS.

<div ng-controller="feedListController">
    <table class="table">
        <tr>
            <td>
                Name
            </td>
            <td>
                Url
            </td>
            <td>
                Description
            </td>
        </tr>
        <tr ng-repeat="feeds in feedsList">
            <td>{{feeds.Name}}</td>
            <td>{{feeds.Url}}</td>
            <td>{{feeds.Description}}</td>
        </tr>
        <tr>
            <td>
                <input type="text" ng-model="feedsList.Name" /><br />
            </td>
            <td>
                <input type="text" ng-model="feedsList.Url" /><br />
            </td>
            <td>
                <input type="text" ng-model="feedsList.Description" /><br />
            </td>
            <td>
                <button type="submit" ng-click="save()">Add Feed</button>
            </td>
        </tr>
    </table>
</div>

Using a simple design, I display the details with AngularJS.

The best part is that as soon as I add new feed details, the UI updates in real time. Open the same page in a different browser and add a new entry, and you will see the data update in real time there as well.
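
If you are curious what this looks like without AngularFire, the plain Firebase JavaScript API exposes the same real-time behavior. Here is a minimal sketch (the URL is a placeholder for your own Firebase URL):

var feedsRef = new Firebase("https://your-app.firebaseio.com/Feeds/");

// 'child_added' fires once for every existing entry and again
// whenever any client adds a new one, with no polling involved.
feedsRef.on('child_added', function (snapshot) {
    var feed = snapshot.val();
    console.log(feed.Name + " - " + feed.Url + " - " + feed.Description);
});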

Firebase is not limited to the web; it is also available for iOS/OS X, Java/Android, Node.js, Ember, etc. Building mobile applications with Firebase as real-time storage would be an amazing thing to do. This may not be the best example of Firebase with AngularJS, but it will get you started and let you test part of your application against this awesome real-time storage.

I would recommend looking at this web application built completely on Firebase and AngularJS.

HTML5 Video - Capture And Upload Image To Azure Storage

2. June 2013 23:46

I was exploring GitHub for some image effects/filters; I found a few, but kept exploring and came across an interesting plugin called face-detection. This plugin uses HTML5 getUserMedia to access your web camera, provided your browser supports it (unfortunately, IE10 still doesn't). This seemed cool to me, so I created a sample application to test it. The sample application lets you preview video from your web camera, capture an image from it, and upload that image to Azure storage. The JavaScript code for the HTML5 video is from David Walsh's post. Not every browser supports getUserMedia, so if the user's browser doesn't, you can show an alert notifying them of the incompatibility. Here is the function that checks your browser's support for getUserMedia.

function hasGetUserMedia() {
    // Note: Opera is unprefixed.
    return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
              navigator.mozGetUserMedia || navigator.msGetUserMedia);
}

if (hasGetUserMedia()) {
    // Good to go!
} else {
    alert('getUserMedia() is not supported in your browser');
}

To explore more of what you can do with getUserMedia, visit HTML5Rocks. Moving on, I have a plain HTML page with three elements on it initially. The first is a video tag, which shows the stream from the web camera attached to your machine or from your laptop's built-in camera. The second is a canvas tag, used to show the image captured from the web camera. The third is a normal button that lets me capture the image at will on its click.

<video autoplay id="video" width="640" height="480"></video>
<input id="snap" type="button" value="Click" />
<canvas id="canvas" width="640" height="480"></canvas>

Notice the autoplay attribute on the video tag. I have set it because I don't want the video to freeze as soon as it starts. To make the video work, I am using David Walsh's code as-is, without any changes.

window.addEventListener("DOMContentLoaded", function () {
    // Grab elements, create settings, etc.
    var canvas = document.getElementById("canvas"),
        context = canvas.getContext("2d"),
        video = document.getElementById("video"),
        videoObj = { "video": true },
        errBack = function (error) {
            console.log("Video capture error: ", error.code);
        };

    if (navigator.getUserMedia) { // Standard
        navigator.getUserMedia(videoObj, function (stream) {
            video.src = stream;
            video.play();
        }, errBack);
    } else if (navigator.webkitGetUserMedia) { // WebKit-prefixed
        navigator.webkitGetUserMedia(videoObj, function (stream) {
            video.src = window.webkitURL.createObjectURL(stream);
            video.play();
        }, errBack);
    }

    // Trigger photo take
    document.getElementById("snap").addEventListener("click", function () {
        context.drawImage(video, 0, 0, 640, 480);
    });
}, false);
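
Note that browsers have since standardized and un-prefixed this API. On a current browser, the prefixed if/else above collapses into a single promise-based call; a rough modern equivalent (not part of the original 2013 sample, and it drops into the same DOMContentLoaded handler, using the same video element) would be:

navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
        // Assigning the stream directly replaced the createObjectURL dance
        video.srcObject = stream;
        video.play();
    })
    .catch(function (error) {
        console.log("Video capture error: ", error.name);
    });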

In the above script, you can see that it first checks whether the navigator has getUserMedia support. Notice there are two conditions: one for the standard API and one for the WebKit-prefixed version. At the end is the click handler on the button, which lets me capture the image.

When you view the application in the browser, the browser will prompt you to allow or deny the application access to the web camera.

You cannot run this by just double-clicking the file, for security reasons. You must serve the page from a server: from the Visual Studio development server, from IIS, or by publishing the file to your production server. It can be annoying for users to grant access to the camera every time they open the page. The browser can remember the user's consent, but only if you are using HTTPS; if you plan to stick with HTTP, it cannot.

Looking at the address bar again, you will notice that the favicon of the web application changes to an animated icon: a little red dot that keeps flashing, notifying the user that the camera is in use. If you want to block access to the camera, permanently or temporarily, you can click the camera icon on the right-hand side of the address bar.

When you click the icon, you are prompted with options to continue allowing access to the camera or to always block it. Also notice the camera option: it shows two devices attached to the machine. I built this application on my laptop, so I can see the integrated camera (the front-facing camera of my laptop) and a second camera plugged into my laptop's USB port. I can choose either one. If you change any of these settings and click Done, you will be asked to reload the page.

This is how my web page looks.

The Click button at the bottom right-hand side lets me take a snap of the feed coming from my camera. When I click it, the image is captured and shown on the canvas we added to the page. This is my web page after I took a snap from my web camera.

The second image (at the bottom) is the canvas showing the image snapped from the camera. Now I want to upload the image to my Azure cloud storage. If you look at the address bar, you'll notice that I have used a plain HTML page. Whether you use a plain HTML page or a web form is up to you; both work fine. The problem with web forms is that uploading the snapped image to cloud storage (or to a server-side repository) would cause a post-back, which is not cool at all. So I am going to do it with an AJAX call instead. Add an HTML button to the page and, on its click event, upload the file to the cloud.

Because the image lives on the canvas, I cannot grab its bytes directly. Instead, I extract the Base64 string of the image from the canvas and pass it as a parameter; the server method then converts the Base64 string to a stream and uploads it to cloud storage. Here is the jQuery AJAX call.

function UploadToCloud() {
    $("#upload").attr('disabled', 'disabled');
    $("#upload").attr("value", "Uploading...");
    // Grab the canvas and extract the Base64 payload of the JPEG data URI
    var canvas = document.getElementById("canvas");
    var img = canvas.toDataURL('image/jpeg', 0.9).split(',')[1];
    $.ajax({
        url: "Camera.aspx/Upload",
        type: "POST",
        data: JSON.stringify({ image: img }), // safer than concatenating the JSON by hand
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function () {
            alert('Image Uploaded!!');
            $("#upload").removeAttr('disabled');
            $("#upload").attr("value", "Upload");
        },
        error: function () {
            alert("There was some error while uploading Image");
            $("#upload").removeAttr('disabled');
            $("#upload").attr("value", "Upload");
        }
    });
}

The canvas.toDataURL line in the above script is the one that gets the Base64 string of the image from the canvas. The string ends up in the variable named img, which I pass as a parameter to the Upload web method in my code-behind page. You could use a web service, a handler, or maybe a WCF service to handle the request, but I am sticking with page methods. Before you can actually use Azure storage, you have to add a reference to the Azure Storage library from NuGet. Run the below NuGet command to add the Azure Storage reference to your application.

Install-Package WindowsAzure.Storage

After the package/assemblies are referenced in your code, add the below namespaces.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Microsoft.WindowsAzure.Storage.Auth;
using System.IO; //For MemoryStream
using System.Web.Services; //For WebMethod attribute

These namespaces let me connect to my local storage emulator or to my live cloud storage. This is my upload method, which uploads the file to my live cloud storage.

[WebMethod]
public static string Upload(string image)
{
    string APIKey = "LFuAPbLLyUtFTNIWwR9Tju/v7AApFGDFwSbodLB4xlQ5tBq3Bvq9ToFDrmsZxt3u1/qlAbu0B/RF4eJhpQUchA==";
    string AzureString = "DefaultEndpointsProtocol=https;AccountName=[AccountName];AccountKey=" + APIKey;

    // Convert the Base64 string coming from the canvas back into raw bytes
    byte[] bytes = Convert.FromBase64String(image);

    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(AzureString);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

    // "img" is the container that holds the uploaded images
    CloudBlobContainer container = blobClient.GetContainerReference("img");

    // The canvas image has no name, so generate a random one
    string ImageFileName = RandomString(5) + ".jpg";

    CloudBlockBlob imgBlockBlob = container.GetBlockBlobReference(ImageFileName);

    using (MemoryStream ms = new MemoryStream(bytes))
    {
        imgBlockBlob.UploadFromStream(ms);
    }

    return "uploaded";
}

I have decorated this method with the WebMethod attribute. This is necessary, since I have to call the method from jQuery/JavaScript; the method must also be public static. The AzureString serves as the connection string for the cloud storage. Change [AccountName] in the connection string to your storage account name; in my case it is propics. (If you want to test against the local storage emulator first, the connection string is simply UseDevelopmentStorage=true.) The method accepts the Base64 string as a parameter.

A storage account is not the same thing as a container, so I have to create a container, which is what actually holds the images I upload to cloud storage (you can call container.CreateIfNotExists() once before uploading if it doesn't exist yet). I am not explaining the code line by line, but it is not that complicated. To upload a file to storage I must have a file name. The image I get from the canvas is just a Base64 string and does not have a name, so I have to give it one. I have a function that generates a random string, which I use as the file name for my image file.

// stolen from http://stackoverflow.com/questions/1122483/c-sharp-random-string-generator
private static string RandomString(int size)
{
    Random _rng = new Random();
    string _chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    char[] buffer = new char[size];

    for (int i = 0; i < size; i++)
    {
        buffer[i] = _chars[_rng.Next(_chars.Length)];
    }

    return new string(buffer);
}

This is a small function I stole from Stack Overflow to generate a random name for my image file. So that's it; now I click the upload button on my page.

Now, if I check my cloud storage, I can see the uploaded files.

This is just a simple example to get you started. You could apply some cool effects to the pictures before uploading them; a quick search will turn up plenty of links to libraries for that. I came across a brilliant plugin called filtrr. You should try it and come up with something awesome, or maybe something like Instagram. Hope this helps some folks out there.

Using Skydrive REST API

28. April 2013 21:52

I have created a wrapper class to simplify working with the REST API. I assume you will be using a web application. The first step is to create an application at dev.live.com. Once the application is created you will have the Client ID, Client Secret, and redirect domain; we will be using the Client ID and redirect domain. Remember that the redirect domain cannot be localhost. You must provide a real domain name, because after successful authentication the user is redirected to that domain along with the access token.

In the response you get the access token, which is valid for the next 3600 seconds. The access token is part of the URL you are returned to on your domain; in my case it is http://midnightprogrammer.net/#access_token=EwBIA…. I then retrieve the access token and pass it to my wrapper, which does the work for me.
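
Since the access token comes back in the URL fragment (the part after #), it is only visible to the browser, not to server-side code. A minimal client-side sketch for pulling it out (a hypothetical helper, not part of the wrapper itself):

function getAccessToken() {
    // The fragment looks like: #access_token=EwBIA...&token_type=bearer&expires_in=3600
    var match = /access_token=([^&]+)/.exec(window.location.hash);
    return match ? decodeURIComponent(match[1]) : null;
}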

Time to look at and understand the code a bit. I have been using SkyDrive a lot for backup and sharing files, and I also access SkyDrive from a web application I built. But that code is pretty messy and needs revision, so I created a wrapper around the API.

First I need to authenticate the user with his/her Live credentials and the correct scopes. If the scopes are wrong, you will not be able to perform actions on SkyDrive; to learn more about scopes, read here. As I am working with SkyDrive, I am using two scopes: wl.signin and wl.skydrive_update. wl.skydrive_update is necessary if you want write access to SkyDrive. The SkyDrive wrapper class has an Authenticate method that redirects you to the Live service's authentication screen. Once you enter your credentials, you are redirected to the domain (the one you provided when you created the application) along with the access token. Here is the sample code:

public void Authenticate()
{
    string authorizeUri = AuthUri;
    authorizeUri = authorizeUri + String.Format("?client_id={0}&", _ClientId);
    authorizeUri = authorizeUri + String.Format("scope={0}&", "wl.signin,wl.skydrive_update");
    authorizeUri = authorizeUri + String.Format("response_type={0}&", "token");
    authorizeUri = authorizeUri + String.Format("redirect_uri={0}", HttpUtility.UrlEncode(_SiteURL));
    this.AuthURI = authorizeUri;
}

This is a look inside the wrapper class; in an actual scenario you will not build the URL yourself. Instead, you use the wrapper like this:

SkyDrive drive = new SkyDrive(clientID, siteURL); // your Client ID and redirect domain
drive.Authenticate();
Response.Redirect(drive.AuthURI);

In the constructor of the SkyDrive class, the first parameter is your Client ID and the second is the site URL, i.e. the redirect domain. The SkyDrive object drive (whatever name you give it) then calls the Authenticate method, which builds the URL with all the required parameters and stores it in the public property AuthURI, which I use to redirect the user so they can authenticate with their credentials.

Once the user is authenticated, you are handed the access token in the URL. How you fetch it is up to you. Once you have fetched it, keep it safe. To make sure the access token travels with your API calls, save it like this:

drive.AccessToken = "EwBIA.....";

Once you have the access token, you are all set to make your first call to SkyDrive. For simplicity, I am making a request to get the basic user information.

User usr = drive.GetUserInfo();

Moreover, you can list all the folders in your SkyDrive with this simple call.

// The original snippet constructed an empty Folder here; presumably the wrapper
// exposes a call on the drive object that returns the folder collection
// (the method name below is a guess)
Folder driveFolders = drive.ListFolders();
foreach (var folder in driveFolders)
{
    Response.Write(folder.Value + " " + folder.Key + "<br/>");
}

The above code returns a dictionary containing the name and id of each folder. You need the folder id in order to list the files inside a folder, which you can do with the code below.

FolderContent content = drive.ListFolderContent("folder.77e33dba68367a16.77E33GBA68467A16!2189");
for (int i = 0; i < content.Files.Count; i++)
{
    Response.Write(content.Files[i].name);
}

You get not just the file names but the complete properties of each file, so you can play around with those too.

Coming to the most searched-for feature of the SkyDrive REST API: uploading a file. MSDN states that you can upload files to SkyDrive by making HTTP POST or HTTP PUT requests. Here is how I do it inside my wrapper class.

public bool UploadFile(string FolderID, string FilePath)
{
    bool Uploaded = false;
    try
    {
        byte[] fileBytes = System.IO.File.ReadAllBytes(FilePath);
        string UploadURI = BaseURI + FolderID + "/files/" + Path.GetFileName(FilePath) + "?access_token=" + AccessToken;

        var request = (HttpWebRequest)HttpWebRequest.Create(UploadURI);
        request.Method = "PUT";
        request.ContentLength = fileBytes.Length;

        using (var dataStream = request.GetRequestStream())
        {
            dataStream.Write(fileBytes, 0, fileBytes.Length);
        }

        // SkyDrive answers "201 Created" when the upload succeeds
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            if (response.StatusDescription.ToLower() == "created")
            {
                Uploaded = true;
            }
        }

        return Uploaded;
    }
    catch (Exception)
    {
        return false;
    }
}

I am using the PUT method to upload files to SkyDrive. I have not implemented the POST method yet, but I will in the near future. If you are using my wrapper class, then a single line of code uploads a file to a particular folder.

bool uploaded = drive.UploadFile("folder.77e33dba68367a16.77E33DUA68567A16!104", "D:\\walls\\flowers_442.jpg");

Calling the UploadFile method uploads a file to the folder. The first parameter is the id of the folder you want to upload to, and the second is the local file path.

I am still working on and trying to understand the REST API and its features, and I will add more functions as I learn about them.

The SkyDrive API wrapper is hosted on GitHub. Fork it, change it, customize it as you wish. And if you find or build something good, please do let me know.

I want to credit Christophe Geers for his post, which gave me insight into the API.

Using Cloud Power To Manage Your Source Code

26. April 2013 20:17

If you are working on multiple projects but don't have a good way of managing them, you are in danger. How do you manage your source code? How do you collaborate and work in a team? I have been working from home, in constant touch with my team members, and we needed a way to keep our source code properly controlled. There are many options, like Git, TFS Express, VSS, or some other source control software. But we all work from our homes, and we don't have a server with a public static IP where we could set up source control. I was given the job of exploring the options for managing our source code.

Well, I got to it and found the most suitable options. Here is a list of online services anyone can use, totally free of charge.

Team Foundation Service (TFS)

This is the best thing I have ever come across as a .NET developer. TFS is available as a cloud service, and you can use it for free. My first reaction was: how is that possible? TFS is one of the most expensive pieces of software in the whole .NET ecosystem, as far as I know. The TFS cloud service is awesome; it is a stripped-down version of the full TFS, but it still has more features than I believe we are ever going to use. Most of my team members only care about check-ins and check-outs; they don't care about anything else, but on the other hand I do. So I went exploring, and trust me, it is awesome. Moreover, to my surprise, this online service also has Git integration. That means if you are not familiar with TFS, or if you plan to use another IDE like Eclipse, this is still an option for you; yes, TFS does support Eclipse. Try it out today, as this is one of the most reliable services ever offered to you, and that too for free. On the free plan, a total of 5 users can connect to the service and work concurrently on the same projects. Here is the complete list of features offered with the free service:

  • Up to 5 users
  • Unlimited number of projects
  • Version control (TFVC or Git)
  • Work item tracking
  • Agile planning tools
  • Feedback management
  • Build (still in preview)
  • Test management (still in preview)

Microsoft has said it will announce additional plans and services in 2013. Keep an eye on those announcements, as there could be changes to the service.

Github

GitHub needs no introduction: it is one of the largest source code hosting sites. GitHub is awesome, and its desktop client for Windows makes it even better. But why didn't we go with GitHub? Clearly the service is good, but sadly free accounts only get public repositories for open-source projects; for private repositories you have to shell out some money from your pocket. We could afford that for a limited time, but my team members don't want to learn Git commands or even try the VS extension for Git integration in Visual Studio, so we let it go. For open-source projects you should use GitHub, as the huge community behind it will make your project more awesome.

BitBucket

Another great online service we can rely on. It is excellent, and it gives you the option to choose between public and private repositories, whereas GitHub only offers public repos on the free account. Although BitBucket is promising, none of my team members were interested in using Git. If you want private repositories without TFS, you should go for BitBucket.

CodePlex

Supports TFS and Mercurial, but only for open-source projects; no private projects. Again, for open-source projects CodePlex is good.

Bottom Line

The team is sticking with TFS online, but as usual I am curious to know more about these services, and I personally use all of the above: GitHub and CodePlex for open-source projects and TFS for my personal projects. One more thing: if you are really interested in trying TFS and want to explore it further, download the stripped-down version of TFS 2012, known as TFS 2012 Express, for free and set it up on your office or home network. I strongly recommend making multiple backup sets of your source code and using online source control services along with SkyDrive and DropBox storage to be on the safer side.
