Thin clients are easy to make (well, almost-thin clients).
Use Linux and the X Window System.
X is NOT a GUI, it's a network protocol. Any application that runs on X can run over a network easily (as long as it isn't something that needs 3D hardware acceleration, like Quake).
The easiest way is to get the cheapest possible computers. You can put together good clients from $400 PCs with the cheapest hard drives possible. Make them all identical hardware so administration stays as easy as possible. If you don't, you'll have to use something like kudzu to automatically configure the hardware at first bootup, which is pretty easy anyway.
So for the clients, you set up one machine and install a very basic X setup. (In X terminology the terminals are "servers", and the main machine that lends its CPU time to run the apps is the "client". X is kind of old and predates the internet's notion of servers and clients.)
You set up xdm (the X graphical login manager) to let people easily log in to the remote X client (remember, the server is the machine people are sitting at; the X client is the machine where the apps run).
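To make that concrete, here's a minimal sketch of pointing a terminal's X server at xdm on the central app machine via XDMCP. The hostname "appserver" is a placeholder, not anything from the setup above:

```shell
# Minimal sketch: a diskful terminal asks the central machine's xdm
# for a login screen over XDMCP. "appserver" is a placeholder name.
APPSERVER=appserver
XCMD="X -query $APPSERVER"   # direct query to one known xdm host
echo "$XCMD"
# Alternatively, "X -broadcast" lets the terminal take the first
# willing xdm host that answers on the LAN.
```

Either way, the terminal runs nothing locally but the X server itself; every app the user launches executes on the central machine.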
Then once you get one machine set up, use the "dd" command to make an image of the hard drive, transfer the image to the other terminal machines, and use dd again to put the image on each new hard drive. Or you can use Ghost, which does the same thing, but costs $$.
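The dd step might look like the sketch below. The device name and NFS path are assumptions for illustration; boot from a rescue disk so the source filesystem isn't mounted while you copy it:

```shell
# Sketch of cloning the master client with dd. /dev/hda and the
# NFS-mounted image path are placeholders -- adjust for your hardware.
SRC=/dev/hda                  # the finished client's hard drive
IMG=/mnt/server/client.img    # image file on a share the rescue disk can mount

CAPTURE="dd if=$SRC of=$IMG bs=1M"   # run once, on the master client
RESTORE="dd if=$IMG of=$SRC bs=1M"   # run on each new client's disk
echo "$CAPTURE"
echo "$RESTORE"
```

Because dd copies the raw disk, the target drives should be the same size or larger than the master's.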
Now, that's not technically a "thin client", but it's infinitely simpler.
For the terminal server (the X client) use a nice expensive SMP machine. Put two NICs in it. One NIC goes to the outside world through a router/firewall and has no X stuff running on it. The other NIC (or several, if you have a big farm of clients) goes to the LAN full of X terminals. Make it a gigabit card and hook it up to a nice switch. Use 100 Mb/s full-duplex networking for all the clients to save money.
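A rough sketch of that two-NIC layout, with interface names and addresses as pure assumptions (the actual commands are commented out since they need root and your real hardware):

```shell
# Hypothetical two-NIC terminal server layout. Interface names and
# addresses are made up; run the commented commands as root.
OUTSIDE_IF=eth0   # to the router/firewall; keep X traffic off this one
LAN_IF=eth1       # gigabit link into the X terminal switch

# ifconfig $OUTSIDE_IF 192.168.0.2 netmask 255.255.255.0
# ifconfig $LAN_IF     10.0.0.1    netmask 255.255.255.0
# Drop inbound X (TCP 6000-6010) on the outside interface:
# iptables -A INPUT -i $OUTSIDE_IF -p tcp --dport 6000:6010 -j DROP
echo "outside=$OUTSIDE_IF lan=$LAN_IF"
```

The point of the split is that the only machine touching the outside world is the server, and even it refuses X connections on that side.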
Now this setup has several advantages.
1. Easy administration. Basically disposable PCs. You have an image for the hard drive, so a new PC can be added in minutes; if you use tools like kudzu and an extremely modular kernel, even computers with different hardware than the original machine can be set up in minutes (as long as you pick machines with good driver support, such as motherboards with VIA or Intel chipsets).
2. Easy administration, part 2. No need for a network-based domain. Since all logins are handled via xdm on the main server, administration isn't any more complicated than a single PC with a bunch of users on it. Very secure, too: only one machine has access to any other network.
3. Wide base of Linux applications. No need to develop special thin-client apps; pretty much everything that runs on X will run over the network completely transparently. Only apps that require hardware acceleration are affected. There are also things like CrossOver Plugin, a proprietary product based on Wine that runs your favorite Windows apps on X (like IE or Outlook).
4. Cheap and well tested.
5. The lifetime of the client machines is VERY long. They'll probably still be useful 5-6 years from now, no problem.
6. Infinitely scalable main machine. With Linux you have a choice of hardware: cheap Athlon XP/MP machines, Mac G5s, clusters of machines, all the way up to 32-way pSeries POWER4+ 1.7 GHz Linux servers. Even zSeries mainframes, if you want to spend the BIG BUCKS.
If you want to get fancy like me: I am setting up a system where I have a central X client, but I'm using openMosix. That way I can set it up in a cluster environment where active processes from the main machine migrate to machines with lower loads. For instance, if I have 8 users running Mozilla on the main X client, the various Mozilla instances will migrate to idle machines, effectively making my entire network of x86 machines one gigantic SMP machine.
No single app will run faster than the fastest single processor, but if I have a hundred apps running on a network of a hundred machines, the load will distribute evenly across all of them regardless of where each app originated.
The great thing is that I can use dissimilar hardware (as long as the machines are from a similar processor family, such as i686-based boxes); the cluster will load-balance based on what resources are available on each machine.
But that is mostly gimmicky nowadays. As network-wide filesystems develop, desktop-based clusters like that will become more and more practical. (BTW, this is NOT a Beowulf cluster. Beowulf is for massively parallel processing and requires specially developed programs; openMosix migrates ordinary processes and works with any normal, unmodified app.)
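For the curious, an openMosix setup is mostly a node map plus a couple of userland tools. This is a hypothetical sketch: the map format and tool names (setpe, mosmon) are from the openMosix userland package as I understand it, and the node IDs and addresses are made up:

```shell
# Hypothetical openMosix node map and join commands (all illustrative,
# shown as comments -- check the openMosix docs for your version).
#
# /etc/openmosix.map: node-id  first-IP  count
#   1  10.0.0.1   1     # the central X client
#   2  10.0.0.10  8     # eight worker nodes, 10.0.0.10-17
#
# Join the cluster on each node:
#   setpe -w -f /etc/openmosix.map
# Watch load and processes spread across nodes:
#   mosmon
```

Once the nodes share a map, migration is automatic; no application changes are needed, which is exactly the difference from Beowulf noted above.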
However, if you want TRUE thin clients with no drives whatsoever, booted entirely off the network, you should check out something like the
Linux Terminal Server Project (LTSP).
Thin clients are proven technology, especially where Unix and X terminals are involved, since this Unix stuff was originally designed to run in a mainframe/terminal environment. Unix itself is very flexible and adaptable, which is why it's the only OS to survive this long in so many roles (mainframe, internet server, database server, desktop, etc.).
What's wrong with thin clients isn't the cost of applications (traditional Unix stuff is very expensive, but that's why Linux is getting so popular); it's that most people don't consider or understand them, and PC hardware is stupid cheap (in the good way).
If you're going to use PC-based hardware, why not just run PC desktops? Because sometimes it's easier and cheaper to use something with centralized control: no need for super-complex Windows AD-style domains and the administrative overhead, costs, and unreliability of PC-based networks and hardware. (AD is really just an LDAP-based domain; Novell is better at that sort of thing anyway, and Linux can do LDAP domains too, if that's what you want.) Of course, sometimes it's cheaper to run desktops, and that is, after all, what everybody is familiar with.