Monday 30 June 2008

GLOSS session on Open High Availability Cluster


Today we had a session on OHAC, and it turned out to be a brainstorming session! Until today I had a different viewpoint about clusters and parallel processing, but Abhishek explained the strategies involved in clusters. He started from scratch, covering even the basics, and opened his lecture by playing an “open” video that was made with the Blender software using the supercomputing service provided by network.com. I could understand that there would be a heavy loss of data and money when a server crashes or, say, when a disaster strikes.
Interrupt received!!!
Oops! No donut for you! Wait…
For the non-techies reading this, I’d like to define some of these concepts, as I’m eager to let everybody know how things work in big companies such as Sun, Google (Gmail), Yahoo and others. Basically, clusters are two or more systems that execute the applications given to them. In case of a failure in one cluster, the other cluster tolerates it and gives fail-safe execution of the application.
Take this situation: you are making a money transaction through an Internet banking service, and suddenly the server providing the service fails! Boom!! You won’t know what the consequences may be, right? Again, think of a situation where your mail provider’s server has crashed once and for all. Are you going to use its services from then on? In the same way, imagine you want to upload a video and YouTube isn’t working; that would be very annoying, right? In such situations you can have high availability clusters. When an application is running and some misbehavior occurs in that cluster, the application can be transferred with great ease to the other cluster without making the end user feel that there has been a crash or a change of cluster.
In this setup, you have the operating system talking to the hardware and, above it, the cluster infrastructure that distributes the work properly. The applications are safely moved between the clusters during a failover by special applications called Agents.
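For the techies who want something concrete: the sketch below is my own toy illustration, in Python, of what such an agent does. It is not Open HA Cluster’s actual code; the node names, the ping-based health probe and the start_app placeholder are all assumptions made for the example, and a real agent would use the cluster framework’s own probe, start and stop methods.

    import subprocess
    import time

    NODES = ["node1", "node2"]   # hypothetical node hostnames
    CHECK_INTERVAL = 5           # seconds between health probes
    MAX_FAILURES = 3             # failed probes tolerated before failing over

    def app_is_healthy(node):
        """Pretend health probe: here just a single ping to the node."""
        result = subprocess.run(["ping", "-c", "1", node],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        return result.returncode == 0

    def start_app(node):
        """Placeholder for whatever actually starts the service on a node."""
        print("Starting the application on", node)

    def monitor():
        active, standby = NODES
        failures = 0
        while True:
            if app_is_healthy(active):
                failures = 0
            else:
                failures += 1
                print("Probe of", active, "failed", failures, "time(s)")
                if failures >= MAX_FAILURES:
                    # Failover: the standby node takes over the application.
                    start_app(standby)
                    active, standby = standby, active
                    failures = 0
            time.sleep(CHECK_INTERVAL)

    if __name__ == "__main__":
        monitor()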
The most exciting part of the session was the simulation of this architecture on an OpenSolaris machine. Abhishek had created zones, which are virtualized instances of the operating system that all share the host’s kernel. He launched a song in RealPlayer in one zone and ordered that zone to shut down. To everybody’s surprise, RealPlayer failed over to another zone and kept running successfully.
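If you want to poke at zones yourself on an OpenSolaris box, here is a rough sketch of the kind of thing the demo showed, again just my own illustration and not the actual demo setup: the zone names, the realplay binary and the song path are placeholders, and in the real demo the cluster agent did the failover automatically.

    import subprocess

    PRIMARY_ZONE = "zone1"    # placeholder zone names
    BACKUP_ZONE = "zone2"
    PLAYER_CMD = ["realplay", "/export/home/demo/song.mp3"]   # assumed binary and path

    def zone_is_running(zone):
        """`zoneadm list` prints the names of the running zones, one per line."""
        out = subprocess.run(["zoneadm", "list"], capture_output=True, text=True)
        return zone in out.stdout.split()

    def run_in_zone(zone, command):
        """Run a command inside a zone with zlogin (needs root privileges)."""
        return subprocess.run(["zlogin", zone] + command)

    # Mirror the demo by hand: play in the primary zone if it is up,
    # otherwise fall back to the backup zone.
    if zone_is_running(PRIMARY_ZONE):
        run_in_zone(PRIMARY_ZONE, PLAYER_CMD)
    elif zone_is_running(BACKUP_ZONE):
        print(PRIMARY_ZONE, "is down, playing in", BACKUP_ZONE, "instead")
        run_in_zone(BACKUP_ZONE, PLAYER_CMD)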
I was still hooked on this topic even after the session had ended, and now I’ve got a few PDFs on it. Very eager to go through them! So, rest is next...
To the techies: if you wish to have a look at the presentation, do mail me and I’ll send you a copy.

5 comments:

Gowri said...

For your kind information, I am not computer savvy like you. Really enjoyed it, and you have given a beautiful explanation, an understandable version for a layman or laywoman to know about the recent trends in technology. Keep up the good work!!

Kumar Abhishek said...

By the way Sanjeev, we refer to a group of nodes as a cluster, and it is the nodes in the cluster on which the applications run. So in case of a failure, the application running in the cluster is moved to a functioning node, not to another 'cluster' as you have mentioned. But I am glad you guys could understand this much! :) Keep up the spirit!

-Abhishek

Insanity Rulz said...

Looking forward to you becoming an active contributor to OHAC.

ss said...

THAT WAS A GOOD AND INFORMATIVE BLOG!!

Unknown said...

Great blog. How can I paste your cool widget clock into my sales blog?
Thanks,
Robert Reagan
Sun Microsystems
Minnesota, U.S.A.