Shared disk seems to be the norm for OLTP clusters.
Shared nothing seems to be the norm for DSS clusters.
What happens in the case of a mixed workload?
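To make the contrast concrete, here is a minimal sketch (Python, with made-up node names) of shared-nothing routing: an OLTP-style point query lands on a single partition, while a DSS-style scan fans out to every node, so a mixed workload keeps exercising both paths at once.

# illustrative only: hash-partitioned (shared-nothing) routing across hypothetical nodes
NODES = ["node0", "node1", "node2", "node3"]

def node_for(key):
    # each row lives on exactly one node, chosen by its partitioning key
    # (a real system would use a stable hash, not Python's built-in hash())
    return NODES[hash(key) % len(NODES)]

def oltp_lookup(key):
    # OLTP-style point query: touches one node only
    return [node_for(key)]

def dss_scan():
    # DSS-style scan or aggregate: fans out to all nodes
    return list(NODES)

print(oltp_lookup("order#42"))  # -> one node
print(dss_scan())               # -> every node; a mixed workload does both

A shared-disk cluster avoids the routing question (any node can read any page) but pays instead in cache-coherency and lock traffic between nodes, which is why neither model is obviously right for a mixed workload.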
How do you draw the boundary line between a few clusters
of SMPs (4-, 8-, or 16-CPU machines)
and lots of clusters of 2-CPU machines (like blade clusters)?
Assume 32-bit memory addressing is a bottleneck.
Remember, this is not a hypothetical question but one asked
with real applications in mind, and with the real cost of
scaling for increased workloads.
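For what it's worth, the arithmetic behind the 32-bit bottleneck is simple (rough sketch below; 4 GiB is just what a 32-bit pointer can address, before any OS or PAE workarounds):

# back-of-the-envelope: the ceiling a 32-bit address space imposes
addressable_bytes = 2**32
print(addressable_bytes // 2**30, "GiB per process")   # 4 GiB
# buffer cache, sort/work areas and per-connection memory all have to fit
# under that ceiling on 32-bit, which is where the bottleneck bites first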
We would also need to consider the availability and
robustness of the architectures, since blade servers are
cost-effective but not highly available, and vice versa with SMPs.
I have real doubts about scaling up an SMP
(say, from 8 CPUs to 16), quite apart from the cost.
Remember, this SMP is only one of a few in the cluster of SMPs.
I am not looking for quick-fix answers but for a real
discussion of where the industry is moving with respect to
the scalability issues it faces now, and the way businesses
want their problems fixed within their budgets.
Also remember the cost of fast interconnects or crossbars
between the servers, in terms of both latency and dollars.
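A rough way to see the latency side of that cost (the numbers below are assumed, ballpark figures, not measurements): even a fast interconnect is one to two orders of magnitude slower than local memory, so every cross-node hop has to earn its keep.

# assumed, order-of-magnitude latencies -- not benchmark results
local_dram_ns   = 100      # memory access inside one SMP or blade
interconnect_ns = 5_000    # one round trip over a fast cluster interconnect
print(interconnect_ns / local_dram_ns, "x slower per remote access")
# the interconnect's dollar cost buys bandwidth, but the latency gap to
# local memory remains, so cross-node traffic still has to be minimised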
In addition, many of the blade servers that have been coming
to market since 2003 do not have enough bus bandwidth for
fast interconnects, except MAYBE Opteron (assuming most blade
servers are x86 architectures).
Are there blade servers based on PowerPC?
I think the things that tilt in favour of blade servers are
1) very fast interconnects,
2) high throughput and low latency in memory and buses,
3) 64-bit extensions, and
4) robust OSes built on those 64-bit extensions.
The combination of all these would prove a tough competitor
to SMPs (yes, the price too is in favour of blades).
However, what about the high-availability issue? Just to
maintain high availability, do you think one ends up building
in a whole lot of redundancy in the form of a large number of
blade servers, with an unnecessary increase in cost and
maintenance?
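On that last point, here is a crude availability model (the per-blade availability and blade counts are made-up numbers): with N blades doing real work and k spares, the cluster counts as "up" while no more than k blades are down, which shows how quickly the redundancy you are describing adds up.

from math import comb

def cluster_availability(p_node, n_needed, k_spare):
    # probability that at most k_spare of the (n_needed + k_spare) blades are down,
    # assuming independent failures with per-blade availability p_node
    total = n_needed + k_spare
    return sum(comb(total, d) * (1 - p_node) ** d * p_node ** (total - d)
               for d in range(k_spare + 1))

# made-up figures: 99% per-blade availability, 16 blades' worth of real work
for k in (0, 1, 2, 4):
    print(k, "spares ->", round(cluster_availability(0.99, 16, k), 6))

Whether that extra hardware and its maintenance is "unnecessary" or simply the price of blade-style availability is exactly the trade-off against a fault-tolerant SMP.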