LDMLoadbalancingSupport

Summary

Right now, LTSP in Ubuntu has no good solution for using more than one server in a thin client deployment.

Rationale

Larger rollouts of LTSP will be possible when it becomes easy to add any number of additional application servers to the network.

Use Cases

Rudolfo has 400 thin clients and four application servers. He would like to balance the load of login sessions across the four application servers to support up to 360 concurrent logins.

Margot teaches computer science at a school in a rural area. She has 17 recycled thin clients available for her lab, but no single server capable of supporting all 17 TCs. Instead, she has scraped together three desktop-class machines for this purpose, and she wants to be able to support all 17 TCs with concurrent logins.

Scope

This spec is concerned with load balancing ldm login sessions across multiple application servers.

Design

Implementation

The first iteration of the implementation is available at: http://ltsp.mindtouchsoftware.com/ltsp-loadbalance.

It consists of a standalone server component (ltsp-server-advertise) that provides clients with statistics about the resources available. It is implemented in Python as a daemon: the application waits for incoming queries on a port (currently 377), returns an XML document with the statistics, and closes the socket.
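
Below is a minimal sketch of such a one-shot daemon, assuming a plain TCP listener on port 377; the XML element names are illustrative only, since the real schema is still undocumented (see Outstanding Issues below).

    #!/usr/bin/env python
    # Illustrative one-shot stats daemon in the spirit of
    # ltsp-server-advertise: accept a connection, send an XML document
    # with the current statistics, then close the socket.
    import os
    import socket

    PORT = 377  # port the clients query (currently 377)

    def gather_stats():
        load1, load5, load15 = os.getloadavg()
        ncpus = os.sysconf('SC_NPROCESSORS_ONLN')
        # Element names below are made up for illustration.
        return ('<?xml version="1.0"?>\n'
                '<server-stats>\n'
                '  <loadavg>%.2f</loadavg>\n'
                '  <cpus>%d</cpus>\n'
                '</server-stats>\n' % (load1, ncpus))

    def serve():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('', PORT))
        sock.listen(5)
        while True:
            conn, addr = sock.accept()
            conn.sendall(gather_stats().encode('utf-8'))
            conn.close()  # one response per connection, then close

    if __name__ == '__main__':
        serve()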

The client component is mostly self-contained in its own module (pickserver.py). This module is integrated with ldm, where it periodically queries the servers from a predefined list (right now the default is every 5 seconds; a more sensible default would be every 30 seconds or a minute). As soon as the user logs in, the best server is already known. The changes required to ldm are minimally invasive, since most of the code (as stated above) is split into the separate module.
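
A hypothetical sketch of that client side, in the spirit of pickserver.py: poll each server in a predefined list, parse the returned XML, and keep the stats cached so the best server is already known at login time. The SERVERS list and the element names are assumptions for illustration.

    import socket
    import xml.dom.minidom

    PORT = 377
    SERVERS = ['appserv1', 'appserv2', 'appserv3']  # illustrative list

    def query_server(host, timeout=2.0):
        """Fetch the XML stats document from one server, or None on failure."""
        try:
            sock = socket.create_connection((host, PORT), timeout)
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
            sock.close()
            doc = xml.dom.minidom.parseString(b''.join(chunks))
            text = lambda tag: doc.getElementsByTagName(tag)[0].firstChild.data
            return {'host': host,
                    'loadavg': float(text('loadavg')),
                    'cpus': int(text('cpus'))}
        except Exception:
            return None  # treat unreachable or garbled servers as down

    def poll_all():
        """Return stats for every server that responded."""
        return [st for st in map(query_server, SERVERS) if st is not None]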

It's very hard to pick the "best" server. Here are some options:

  1. Implemented: The administrator gets some tunable parameters, e.g. ignore servers that are swapping, ignore servers that take more than x ms to respond, and ignore servers whose load is higher than x (normalized to the number of processors). A random choice is then made among the remaining servers. If no candidates are left, a random server is picked from the list of servers that responded (see the sketch after this list).
  2. Implemented: Throw away the non-responding servers and pick a random one from those remaining (random tends to make a good choice a high percentage of the time).
  3. Unimplemented: Each client uses the same server every session (dynamically eliminating unresponsive servers from the list of what's available). This can be done by hashing the TC MAC address and taking the hash modulo the number of available servers.
  4. Unimplemented: The user is presented with a list of available servers (preferably with some stats about the server load) and chooses the preferred server. One server can be automatically recommended. (This is a variation of option 1 above.)
    • Status info may include: "Down", "Idle", "Busy", "Swapped"
    • The chooser should auto-refresh periodically.
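
As a rough illustration, here is a hypothetical Python sketch of the first three strategies, reusing the stats dictionaries from the polling sketch above; the MAX_LOAD_PER_CPU threshold name is an assumption, not an actual tunable.

    import random
    import zlib

    MAX_LOAD_PER_CPU = 1.0  # illustrative threshold, per processor

    def pick_tunable(stats):
        """Option 1: filter by tunables, then choose at random; fall back
        to any responding server if the filter leaves nothing."""
        candidates = [s for s in stats
                      if s['loadavg'] / s['cpus'] <= MAX_LOAD_PER_CPU]
        pool = candidates or stats
        return random.choice(pool)['host'] if pool else None

    def pick_random(stats):
        """Option 2: any responding server, chosen at random."""
        return random.choice(stats)['host'] if stats else None

    def pick_sticky(stats, mac):
        """Option 3 (unimplemented): hash the client's MAC address modulo
        the number of responding servers, so each client sticks to one
        server for as long as that server stays up."""
        if not stats:
            return None
        hosts = sorted(s['host'] for s in stats)
        # crc32 is deterministic across runs, unlike Python's hash().
        return hosts[zlib.crc32(mac.encode('utf-8')) % len(hosts)]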
  • We will modify ltsp-update-sshkeys to make configuration of additional servers easy: it will check whether /etc/ltsp/extraservers exists, and if so it will connect to the listed servers, retrieve their keys, and append them to the ssh_known_hosts file in the chroot.
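
A hypothetical sketch of that planned behavior (the real tool is a shell script, so this Python version only illustrates the logic; the chroot path is an assumption):

    import os
    import subprocess

    EXTRA = '/etc/ltsp/extraservers'
    # Assumed chroot location; adjust to the actual LTSP chroot in use.
    KNOWN_HOSTS = '/opt/ltsp/i386/etc/ssh/ssh_known_hosts'

    def update_extra_keys():
        """Scan the servers listed in /etc/ltsp/extraservers and append
        their host keys to the chroot's ssh_known_hosts."""
        if not os.path.exists(EXTRA):
            return
        servers = [line.strip() for line in open(EXTRA) if line.strip()]
        with open(KNOWN_HOSTS, 'a') as out:
            for host in servers:
                keys = subprocess.check_output(['ssh-keyscan', host])
                out.write(keys.decode('utf-8'))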

Unimplemented Pieces

Because the existing code for querying servers is in Python and the next-generation greeter is written in C, it's impossible to reuse the already-written module directly. There are also plans for future non-GTK greeters (Qt comes to mind). The solution is to use an IPC mechanism between ldm and the greeter, through which ldm can provide information about the servers to the greeter. The mechanism that comes to mind is POSIX message queues.
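
A hypothetical sketch of that IPC, using the third-party posix_ipc module (an assumption; the actual protocol structure is still TODO below). ldm would post one simple "host status" line per server, and the greeter would drain the queue to refresh its chooser.

    import posix_ipc

    QUEUE_NAME = '/ldm-serverinfo'  # illustrative queue name

    def publish_stats(stats):
        """ldm side: push one 'host status' line per server."""
        mq = posix_ipc.MessageQueue(QUEUE_NAME, posix_ipc.O_CREAT)
        for s in stats:
            status = 'Busy' if s['loadavg'] / s['cpus'] > 1.0 else 'Idle'
            mq.send(('%s %s' % (s['host'], status)).encode('utf-8'))
        mq.close()

    def read_stats():
        """Greeter side: drain the queue without blocking."""
        mq = posix_ipc.MessageQueue(QUEUE_NAME, posix_ipc.O_CREAT)
        lines = []
        while True:
            try:
                msg, _prio = mq.receive(timeout=0)
            except posix_ipc.BusyError:  # queue is empty
                break
            lines.append(msg.decode('utf-8'))
        mq.close()
        return lines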

TODO: protocol structure

Outstanding Issues

  • Document the XML schema.
  • Clean up the code of the server component.
  • Even though the default right now is to scan the server list every 5 seconds, that behavior will become optional; the default will be changed to query the servers only once, when the user logs in.
  • Greeter integration.
  • ltsp-update-sshkeys needs to import keys from multiple servers.

BoF agenda and discussion


CategorySpec
