Free e-Books From Microsoft And Telerik That You Should Be Reading

14. June 2015 08:15

.NET Framework Cloud Microsoft   0 Comments

I am spending my time reading books these days rather than reading blogs. Here are 2 free e-books: one from Microsoft Press with valuable insights on the cloud, Fundamentals of Azure, and the other from Telerik, The Developer's Guide To The New .NET. I encourage you to read both books, but if you are not into the cloud then you should read the book from Telerik.

Microsoft Press - Fundamentals of Azure (Download)

From Microsoft Press

The “Microsoft Azure Essentials” series helps you advance your technical skills with Microsoft Azure. “Microsoft Azure Essentials: Fundamentals of Azure” introduces developers and IT professionals to the wide range of capabilities in Azure. The authors—both Microsoft MVPs in Azure—present conceptual and how-to content for seven key areas and describe management tools and business cases.

Telerik - The Developer's Guide To The New .NET (Download)

From Telerik

This ebook is no fluff, just a developer-to-developer breakdown of what's in store for .NET in 2015. Included are code snippets and step-by-step tutorials on handy new features and techniques.

Download for free, and learn more about:

  • Visual Studio 2015
  • .NET Core goes open source
  • Cross-platform development with .NET
  • C# 6.0
  • Roslyn
  • Windows 10

How Generic The Code Can Be - Resolving Type T at Compile Time

7. June 2015 08:07

.NET Framework C#  2 Comments

It is fun working with generics, but sometimes they just won't work the way I intend. While working on a personal project, I was trying to write code as generic as possible so I could re-use the functions and extension methods in my other projects as well. But things will not always work the way you want them to. Here is what I was trying to do.

I am working extensively with JSON in this project, so I thought of creating an extension method for the classes I want to serialize to JSON and save to the file system. Creating the extension method was not a problem, but it turned out, to my surprise, that I just can't use the extension method on type T. If I had paid attention to the details here before writing the function, I would have noticed the problem. Before I tell you what I did, here is the extension method which serializes a class to a JSON string.

// Serializes any reference type to a JSON string using Json.NET (JsonConvert).
public static string ToJSON<T>(this T Entity) where T : class
{
    return JsonConvert.SerializeObject(Entity);
}
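
For clarity, here is a quick usage sketch (Board is one of the project's model classes used later in this post; the Title property is assumed for illustration):

// Any reference type satisfies the class constraint, so this compiles for concrete types.
var board = new Board { Title = "Sprint 1" };
string json = board.ToJSON(); // e.g. {"Title":"Sprint 1"}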

As I want only classes to be serialized, I have a constraint in place that assures me this extension method can only be used when T is a class. I agree that creating an extension method for such simple work is overkill, but I am just too lazy to write this line again and again. The application I am working on can support multiple databases, and for that I am using Dependency Injection (DI). I wrote this function keeping in mind that I just have to pass the model, i.e. a class object, and the function will then save the JSON string from the serialized class to the file system.

public bool Save<T>(T Entity, string Path) where T : class
{
    bool IsSaved = false;
    try
    {
        if (!string.IsNullOrEmpty(Path))
        {
            File.AppendAllText(Path, Entity.ToJSON());
            IsSaved = true;
        }
        return IsSaved;
    }
    catch (Exception)
    {
        throw; // rethrow without resetting the stack trace
    }
}

The above code will not compile. This is because there is no way for the compiler to know the type of the Entity object we are passing. The constraint has nothing to do with it; it is the compiler that has to know what kind of type it is dealing with. In this case we have a type T which is not resolved at compile time, and hence the ToJSON() extension method I am using above, on the assumption that Entity is a class, is not going to work. The important point here is that T is resolved at runtime, and before that, if we try to use the extension method (meant only for classes) on type T, IntelliSense will never show the ToJSON() extension method in the list.

Now to overcome this problem, we have to make some changes to the above code so that the compiler has information it can use, allowing us to make use of the ToJSON() extension method. There is a constraint in place for this function which only allows us to pass a class object as the first parameter, but the case is still the same: the Save function will never know the actual type of Entity. Therefore we can use the Entity object to get the name of the class, like so.

string className = Entity.GetType().Name;

This gives us the name of the class, and now we can use it to get the actual type of T and convert the object to that type (which will be a class). Once we have a class, we can then use our ToJSON() extension method. Here is the updated code.

public bool Save<T>(T Entity, string Path) where T : class
{
    bool IsSaved = false;
    try
    {
        if (!string.IsNullOrEmpty(Path))
        {
            // Convert the runtime object to its concrete type before serializing.
            if (Entity.GetType().Name == "Board")
                File.AppendAllText(Path, Convert.ChangeType(Entity, typeof(Board)).ToJSON());
            else if (Entity.GetType().Name == "Story")
                File.AppendAllText(Path, Convert.ChangeType(Entity, typeof(Story)).ToJSON());

            IsSaved = true;
        }
        return IsSaved;
    }
    catch (Exception)
    {
        throw; // rethrow without resetting the stack trace
    }
}

This code compiles and saves the JSON string to the specified path on execution. Notice the Convert.ChangeType function. It's a simple function which you can use to convert one type to your desired type. The only drawback here is that you have to come back and update the function whenever you add a new class to the project which you want to serialize to JSON. The code is still pretty sleek, in my opinion, for the kind of task I am trying to accomplish.
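
If the list of serializable classes keeps growing, one way to avoid adding a branch for every new class is a small name-to-type map. A rough sketch of that idea (Board and Story as above; this is a variation, not the code used in the project):

// Declared once in the same class as Save<T>; requires System.Collections.Generic.
private static readonly Dictionary<string, Type> KnownTypes = new Dictionary<string, Type>
{
    { "Board", typeof(Board) },
    { "Story", typeof(Story) }
};

// Inside Save<T>, replacing the if/else chain:
Type target;
if (KnownTypes.TryGetValue(Entity.GetType().Name, out target))
    File.AppendAllText(Path, Convert.ChangeType(Entity, target).ToJSON());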

The type T is very useful, but sometimes it may not solve all your problems.


Recurring Tasks Inside ASP.NET Applications Using HangFire

15. June 2014 22:46

ASP.NET ASP.NET MVC C# SQL Server Web  1 Comment

This is open-source at its best. Running background tasks in the context of ASP.NET was, and still is, a big deal for quite a few developers. I use QueueUserWorkItem to schedule emails when a new comment is added on my blog. This makes sure that the UI stays responsive and the user can close the page or navigate to other posts. I have been working on enterprise applications for many years now, and most of the long-running tasks run in the background, i.e. in Windows services.

HangFire is not limited to ASP.NET applications; you can even use it in your console applications.

HangFire is an open-source project which allows us to run recurring tasks within an ASP.NET application. No need for scheduled tasks and Windows services; everything lives within the ASP.NET application. When a new comment is added on my blog, an email is sent to my inbox as a notification to moderate it. In a normal scenario, adding a comment would take longer than usual because an email is also being sent. To overcome this problem, I queued the mail process in the background like so:

bool commentSave = _db.AddComment(comment);
if (commentSave)
{
    // Queue the email on a thread-pool thread so the response returns immediately.
    System.Threading.ThreadPool.QueueUserWorkItem(s => BlogEmail.SendEmail(comment));
    return Json(new { message = "Thanks for your comment. The comment is now awaiting moderation" });
}
else
    return Json(new { message = "There is an error while saving comment. Please try again later" });

As soon as a comment is added, the user is told that the comment has been saved to the DB, while the process of sending the email is scheduled in the background. But this approach has a drawback: what if sending the email fails? As the admin of my blog, will I be able to see the status of the process? HangFire answers all these questions, and it comes with an awesome monitor which displays the status of all background tasks in real-time. I will discuss the HangFire monitor later in this post, but first let's get started with HangFire.

Installing HangFire

HangFire is available on NuGet. Running the below command will automatically add the references to your project and take care of all the configuration.

Install-Package HangFire
Attempting to resolve dependency 'HangFire.SqlServer (= 0.9.1)'.
Attempting to resolve dependency 'HangFire.Core (= 0.9.1)'.
Attempting to resolve dependency 'Common.Logging (= 2.1.2)'.
Attempting to resolve dependency 'Newtonsoft.Json (= 5.0.0)'.
Attempting to resolve dependency 'ncrontab (= 1.0.0)'.
Attempting to resolve dependency 'Dapper (= 1.13)'.
Attempting to resolve dependency 'HangFire.Web (= 0.9.1)'.
Attempting to resolve dependency 'CronExpressionDescriptor (= 1.10.1)'.
Attempting to resolve dependency 'WebActivatorEx (= 2.0.1)'.
Attempting to resolve dependency 'Microsoft.Web.Infrastructure (= 1.0.0.0)'.
Installing 'Common.Logging 2.1.2'.
Successfully installed 'Common.Logging 2.1.2'.
Installing 'ncrontab 1.0.0'.
Successfully installed 'ncrontab 1.0.0'.
Installing 'HangFire.Core 0.9.1'.
Successfully installed 'HangFire.Core 0.9.1'.
Installing 'Dapper 1.13'.
Successfully installed 'Dapper 1.13'.
Installing 'HangFire.SqlServer 0.9.1'.
Successfully installed 'HangFire.SqlServer 0.9.1'.
Installing 'CronExpressionDescriptor 1.10.1'.
Successfully installed 'CronExpressionDescriptor 1.10.1'.
Installing 'WebActivatorEx 2.0.1'.
Successfully installed 'WebActivatorEx 2.0.1'.
Installing 'HangFire.Web 0.9.1'.
Successfully installed 'HangFire.Web 0.9.1'.
Installing 'HangFire 0.9.1'.
Successfully installed 'HangFire 0.9.1'.
Adding 'Common.Logging 2.1.2' to HangfireDemo.
Successfully added 'Common.Logging 2.1.2' to HangfireDemo.
Adding 'ncrontab 1.0.0' to HangfireDemo.
Successfully added 'ncrontab 1.0.0' to HangfireDemo.
Adding 'HangFire.Core 0.9.1' to HangfireDemo.
Successfully added 'HangFire.Core 0.9.1' to HangfireDemo.
Adding 'Dapper 1.13' to HangfireDemo.
Successfully added 'Dapper 1.13' to HangfireDemo.
Adding 'HangFire.SqlServer 0.9.1' to HangfireDemo.
Successfully added 'HangFire.SqlServer 0.9.1' to HangfireDemo.
Adding 'CronExpressionDescriptor 1.10.1' to HangfireDemo.
Successfully added 'CronExpressionDescriptor 1.10.1' to HangfireDemo.
Adding 'WebActivatorEx 2.0.1' to HangfireDemo.
Successfully added 'WebActivatorEx 2.0.1' to HangfireDemo.
Adding 'HangFire.Web 0.9.1' to HangfireDemo.
Successfully added 'HangFire.Web 0.9.1' to HangfireDemo.
Adding 'HangFire 0.9.1' to HangfireDemo.
Successfully added 'HangFire 0.9.1' to HangfireDemo.

I am using HangFire with an ASP.NET MVC application. Here are a few things that you need to configure before you dive in. When installing HangFire via NuGet, it adds HangFireConfig.cs under the App_Start folder. HangFire supports Redis, SQL Server, SQL Azure or MSMQ; I am using SQL Server in this demo. We require this storage because the HangFire monitor uses it to display real-time data about the jobs. To configure HangFire to use SQL Server, open the HangFireConfig.cs file and change the connection string as per your SQL Server installation.

JobStorage.Current = new SqlServerStorage(
    @"Server=GHOST\SERVER; Database=Jobs;user id=sa; password=pass#w0rd1;");

When the application first starts, all required database objects are created. 

You can find the scripts inside the downloaded package at HangFire.SqlServer.0.9.1\Tools\install.sql. The jobs and the monitor use this database to show the real-time status of the jobs running in the background. To view the HangFire monitor, simply navigate to http://<sitename>/hangfire.axd. As it is a handler, you can see it in your web.config file. Let's see it in action:

The navigation pane on the right lets you see the jobs and their status. It even lets you see the queues which are currently running.

Scheduling the Jobs

Scheduling jobs using HangFire is easier than I thought it would be. Take the same example from my blog, which sends an email to my inbox when a new comment is added. If I want to schedule the mail-send process as a background job, I can do it easily using the BackgroundJob class.

bool commentSave = _db.AddComment(comment);
if (commentSave)
{
    BackgroundJob.Enqueue(() => BlogEmail.SendEmail(comment));
    return Json(new { message = "Thanks for your comment. The comment is now awaiting moderation" });
}
else
    return Json(new { message = "There is an error while saving comment. Please try again later" });

As I require it to run only once, I just have to queue it using the BackgroundJob.Enqueue() method. I can also delay the execution of the job using the Schedule method of the BackgroundJob class.

bool commentSave = _db.AddComment(comment);
if (commentSave)
{
    BackgroundJob.Schedule(() => BlogEmail.SendEmail(comment), TimeSpan.FromMinutes(60));
    return Json(new { message = "Thanks for your comment. The comment is now awaiting moderation" });
}
else
    return Json(new { message = "There is an error while saving comment. Please try again later" });

What if sending the email fails? The SendEmail method throws an exception saying the mail sending failed. HangFire handles this by default: it automatically retries 3 more times, with a delay after each retry. But if I want it to retry more than 3 times, I can make use of the AutomaticRetry attribute and pass the number of retries I want, something like this:

[AutomaticRetry(Attempts = 5)]
public bool SendEmail(Comment comment)
{
    //Email code
}

Let's say I have another method that I want to run every minute (overkill for my blog); then I make use of the RecurringJob class.

RecurringJob.AddOrUpdate(() => Storage.PunchIt(), Cron.Minutely);
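
The same call works with the other Cron helpers too; for example, to run the job once a day instead (a sketch reusing the same method):

// Same API, different schedule: run Storage.PunchIt once a day.
RecurringJob.AddOrUpdate(() => Storage.PunchIt(), Cron.Daily);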

The Cron class lets me schedule a job daily, weekly, monthly, yearly, hourly or minutely. Now that my job is scheduled in the background, it's time to take a look at the HangFire monitor.

I have no idea why my Recurring Jobs screen is showing the Next and Last execution times as 44 years ago. But you can see the Succeeded Jobs with a minute's interval (#5 and #4). HangFire uses persistent storage, and therefore you can trigger a job at will or remove it when you feel like it. That means you configure the job in code and manage it from the HangFire monitor.
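
Later HangFire releases also expose an id-based API for managing recurring jobs from code; a hedged sketch (the "punch-it" id is made up, and these overloads may not exist in the 0.9.x packages used here):

// Register the job under an explicit id, then manage it by that id.
RecurringJob.AddOrUpdate("punch-it", () => Storage.PunchIt(), Cron.Minutely);
RecurringJob.Trigger("punch-it");        // fire the job immediately
RecurringJob.RemoveIfExists("punch-it"); // delete it from storage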

What else you can do with HangFire

I just showed you how easy scheduling jobs with HangFire can be. But there are more advanced topics which you should look into for more complex implementations. HangFire supports logging, dependency injection using Ninject, multiple queue processing and more.


Building Real Time Applications With Firebase And AngularJS

17. May 2014 23:25

API Cloud Web  0 Comments

Building real-time applications with SignalR is easy. But building real-time applications with JavaScript will be a problem if you are not aware of Firebase. I first heard of it when I was learning AngularJS. The official AngularJS site has an example where you can add, edit and delete details in real-time. This example uses Firebase as the storage and updates the UI in real time.

The data is stored in JSON format, and accessing it using jQuery is very easy. The Firebase API is pretty powerful and lets you get started in minutes. To get started with Firebase, you first have to create an account. It's a paid service, but for now you can opt for a developer account. As soon as your account is created you will be redirected to your dashboard, where you can create a new application or use the one which is there by default from Firebase. Take note of the URL ending with firebaseIO.com; we will need it later on when implementing.

Clicking on the Manage App button will take you to the details page of the application, where you can view/edit/import/export the data in JSON format, set up authentication with Facebook, Twitter, Google etc., simulate read/write operations and more. You can see the sample JSON data I have been testing in the below screenshot. If you already have a JSON file, you can directly import it from the dashboard using the Import JSON feature.

For this example I am using jQuery and AngularJS. If you wish, you can just stick to jQuery. Create a new HTML page and add the Firebase JS references in the head section. I am using AngularJS, so I am going to add the AngularJS reference as well.

<script src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.2.min.js" type="text/javascript"></script>
<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.15/angular.min.js"></script>
<script src="https://cdn.firebase.com/js/client/1.0.11/firebase.js"></script>
<script src="https://cdn.firebase.com/libs/angularfire/0.7.1/angularfire.min.js"></script>

You know the first three references, but the last one is AngularFire, which is a quick way to work with Firebase in AngularJS applications. Here is the AngularJS code with the firebase dependency.

var feedApp = angular.module('feedDataApp', ['firebase']);

feedApp.controller('feedListController', function ($scope, $firebase) {
    var fbURL = new Firebase("https://scorching-fire-0000.firebaseio.com/Feeds/");
    $scope.feedsList = $firebase(fbURL);

    $scope.save = function () {
        $scope.feedsList.$add({
            Name: $scope.feedsList.Name,
            Url: $scope.feedsList.Url,
            Description: $scope.feedsList.Description
            
        });
        $(":text").val('');
    }
});

I am not getting into the basics of AngularJS, but I will explain a few things in the above code. The first line has the dependency named firebase; in simple AngularJS apps we could have skipped the dependencies. Then I have the controller named feedListController, which is accompanied by its very own $scope and a new parameter, $firebase. Inside the controller I have a variable fbURL which holds my Firebase URL. If you look closely, you will notice that I have appended the URL with Feeds. This is because I want my data to be saved inside the Feeds section in the Firebase storage. $scope.feedsList will hold the collection of all the entries I have in my storage. I have the save function which saves the details to my storage. Inside Firebase's $add function, I have passed Name, Url and Description. You can pass as many parameters as you want; the best thing is that you don't even have to set up the storage schema. As soon as you click the save button, a new entry is created in the storage. This is what my storage looks like.

If you don't want to save it under the Feeds section, then in the above JS file you just have to remove Feeds from the URL. The UI for this demo is simple, just listing the saved JSON data in a readable form using AngularJS.

<div ng-controller="feedListController">
    <table class="table">
        <tr>
            <td>
                Name
            </td>
            <td>
                Url
            </td>
            <td>
                Description
            </td>
        </tr>
        <tr ng-repeat="feeds in feedsList">
            <td>{{feeds.Name}}</td>
            <td>{{feeds.Url}}</td>
            <td>{{feeds.Description}}</td>
        </tr>
        <tr>
            <td>
                <input type="text" ng-model="feedsList.Name" /><br />
            </td>
            <td>
                <input type="text" ng-model="feedsList.Url" /><br />
            </td>
            <td>
                <input type="text" ng-model="feedsList.Description" /><br />
            </td>
            <td>
                <button type="submit" ng-click="save()">Add Feed</button>
            </td>
        </tr>
    </table>
</div>

Using a simple design I have displayed the details using AngularJS.

The best part is that as soon as I add new feed details, the UI gets updated in real-time. Open the same page in a different browser and add a new entry; you will notice that the data is updated in real-time.

Firebase is not limited to the web; it is available for iOS/OS X, Java/Android, Node.js, Ember etc. Building mobile applications using Firebase as real-time storage would be an amazing thing to do. This is not a very elaborate example of Firebase with AngularJS, but it will get you started with Firebase and let you test a part of your application using this awesome real-time storage.

I would recommend you look at this web application built completely on Firebase and AngularJS.


Turn Your Google Drive Folder In An Online Album Viewer

4. May 2014 15:09

API ASP.NET MVC C# Projects Web  4 Comments

Working with the Google Drive API is fun. The documentation is good and will let you get started with the API pretty quickly. I have my complete family album on OneDrive, but its API is still a bit hard to use compared to the Google Drive API, or maybe it is just me. While playing around with the Drive API I ended up creating a small Razor-based web page application which allows you to show a folder as an online web album. I have the source code on Github.

To get started, register your application on the Google Developer Console. Here is a complete guide on how to use the Google APIs.

Copy the API keys and save them in Notepad. The basic usage example on the Google Drive API reference page uses a file called client_secrets.json, which actually holds the API key. You can find this file in the bin folder.

{
  "installed": {
    "client_id": "524339914714.apps.googleusercontent.com",
    "client_secret": "_Lq8D_Ai-T2IbhU8m2zXlQa"
  } 
}

You also have to enable the Drive API before you start using it.

I created a wrapper around the Google Drive API so I can use it in my other projects. Here is the complete code of my GoogleDrive class.

using System;
using System.Collections.Generic;
using System.Web;
using Google;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Download;
using Google.Apis.Drive.v2;
using Google.Apis.Drive.v2.Data;
using Google.Apis.Logging;
using Google.Apis.Services;
using System.Threading.Tasks;
using System.Threading;
//using Google.Apis.Upload;

public class GoogleDrive
{
    DriveService service = null;

    public GoogleDrive()
    {
        // Blocks until the async OAuth sign-in in Run() completes; fine for this sample.
        Run().Wait();
    }

    private File GetFileByID(string fileID, DriveService service)
    {
        File file = service.Files.Get(fileID).Execute();
        if (file.ExplicitlyTrashed == null)
            return file;
        return null;
    }

    public string GetFolderID(string FolderName)
    {
        FilesResource.ListRequest request = service.Files.List();
        request.Q = "title = '" + FolderName + "'";
        FileList files = request.Execute();
        return files.Items[0].Id;
    }
    public List<File> ListFolderContent(string FolderID)
    {
        ChildrenResource.ListRequest request = service.Children.List(FolderID);
        List<File> files = new List<File>();
        //request.Q = "mimeType = 'image/jpeg'";

        request.Q = "'" + FolderID + "' in parents";

        do
        {
            try
            {
                ChildList children = request.Execute();

                foreach (ChildReference child in children.Items)
                {
                    files.Add(GetFileByID(child.Id, service));
                }

                request.PageToken = children.NextPageToken;

            }
            catch (Exception)
            {
                // Stop paging on any error.
                request.PageToken = null;
            }
        } while (!String.IsNullOrEmpty(request.PageToken));
        return files;
    }

    public List<File> ListRootFileFolders()
    {
        List<File> result = new List<File>();
        FilesResource.ListRequest request = service.Files.List();
        do
        {
            try
            {
                FileList files = request.Execute();

                result.AddRange(files.Items);
                request.PageToken = files.NextPageToken;
            }
            catch (Exception)
            {
                // Stop paging on any error.
                request.PageToken = null;
            }
        } while (!String.IsNullOrEmpty(request.PageToken));
        return result;
    }

    public byte[] DownloadFile(string fileID)
    {
        File file = service.Files.Get(fileID).Execute();
        // Blocks on the async download; acceptable for this sample.
        var bytes = service.HttpClient.GetByteArrayAsync(file.DownloadUrl);
        return bytes.Result;
    }


    private async Task Run()
    {
        try
        {
            GoogleWebAuthorizationBroker.Folder = "Files";
            UserCredential credential;
            using (var stream = new System.IO.FileStream("client_secrets.json",
                System.IO.FileMode.Open, System.IO.FileAccess.Read))
            {
                credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
                    GoogleClientSecrets.Load(stream).Secrets, DriveConstants.Scopes, "user", CancellationToken.None);
            }

            var drvService = new DriveService(new BaseClientService.Initializer()
            {
                HttpClientInitializer = credential,
                ApplicationName = "Drive API Sample",

            });

            if (drvService != null)
            {
                service = drvService;
            }
        }
        catch { /* swallow auth errors; service stays null */ }
    }
}

The Storage class uses Biggy for document storage. You have to manually add the names of the folders (treated as albums) on the settings page. They get appended to the settings.json file in the Data folder, which is in the web root of DriveGallery. Here is my settings file.

{"AlbumName":"Atlassian"}
{"AlbumName":"DivX"}
{"AlbumName":"Envato"}
{"AlbumName":"Disqus"}

The settings page is where I add the names of folders/albums. No view and delete options as of now, but I will add them soon.

The added albums are listed on the main/default page. The user then has to click the name of an album to view its contents, which should be images only. At the moment there is no check which prevents it from showing files other than images.

Google Drive is slow when it comes to handling large amounts of data. Google Drive storage is not like Azure or Amazon storage; it's different and will not get you large result sets in a reasonable time. If you want to use Drive Gallery, make sure that you don't have more than 25-50 images in a folder. Querying 25-50 files in a single go takes a while, and even longer if you have more than 50. A better option is to break your folder into parts, like Volume I & II or Part 1 and 2.

I will try to make it look more attractive, but for the moment it serves the purpose for me. Click on the View button to view an album. The View page accepts a query string parameter, name, which holds the name of the folder/album. The Google Drive API does not understand the name of the folder, so I have to get the folder id from the name. I use the GetFolderID method of the GoogleDrive class, which gives me the folder id. The received folder id is then passed to the ListFolderContent method, which lists all the files in the folder.
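
In code, the flow looks roughly like this (a sketch using the GoogleDrive class above; "Envato" is one of the album names from my settings file):

// Resolve the album/folder name to its Drive id, then list its files.
var drive = new GoogleDrive();
string folderId = drive.GetFolderID("Envato");
foreach (var file in drive.ListFolderContent(folderId))
{
    Console.WriteLine(file.Title); // Title comes from the Drive API's File resource
}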

Please note that, for the moment, there is no check to retrieve only images from the folder. The GoogleDrive.cs class is a generic version of the class I am using in my other projects. You just need to make sure that you have only images in the folder and no other files.

Click the View button to view the full-size image in the browser, or click the Download button to download it directly. The Drive API does not have any property which holds a URL to view the image online, so I tweaked the thumbnail URL in the ViewURL property to let me view images online without having to download them.

public class Albums
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string AlbumName { get; set; }
    public string DownloadURL { get; set; }
    public string ViewURL
    {
        get
        {
            // Ask for a 4000px rendition instead of the default 220px thumbnail.
            return ThumbnailLink.Replace("=s220", "=s4000");
        }
    }
    public string ThumbnailLink { get; set; }
}

Taking the ThumbnailLink property and replacing =s220 with =s4000 does the trick.
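
For illustration, with a made-up link (real links come from the API response):

// Hypothetical thumbnail URL; only the trailing size parameter matters here.
var thumb = "https://lh4.googleusercontent.com/abc123=s220";
var view = thumb.Replace("=s220", "=s4000"); // larger rendition of the same image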

Note: The URLs are not permanent; they change when the request expires.

Properties of the Album class:

  • Id: File/Image id.
  • Title: Title of the image/file.
  • AlbumName: Name of the album/folder.
  • DownloadURL: Download URL of the image/file. Original Drive API property name: DownloadUrl.
  • ViewURL: URL which allows user to view the image in the browser.
  • ThumbnailLink: URL of the thumbnail image. Original Drive API property name: ThumbnailLink.

The duration of the operation depends on the number of images in your folder; the fewer the images, the faster the query executes. Just for fun, I have used the Biggy JSON store to save the content details of the folder. If you want to remove it from your implementation, please do so; you may have to change some functions to get everything else working fine.

You can download the complete source code from Github. I am still working on making it more flexible and better looking, as I am not a designer and I seriously lack when it comes to web design. Thanks to Twitter Bootstrap for making it look a bit good.

