Thoughts about multiplayer game design.

Cogman

Lifer
Sep 19, 2000
10,286
147
106
I've been musing over the idea of making a multiplayer game. Part of my musing has led me to consider the problem "How do you support a large number of players without needing large server farms?" Further, how could you make the game experience just as fun with 2 players as it is with 60 players?

The first thought I had about solving this problem was to shift all data processing onto the clients, keeping communication with the server at a bare minimum. However, I am then reminded of the "Diablo problem": with the clients doing everything, the clients can easily cheat and game things.

This led me to consider a distributed server model: every client would act as a server, syncing with some main server but never handling its own processing. This seems like it might work, however it still runs a pretty high risk of cheating. Someone could join with two computers, hack them to use each other as servers, and then have whatever fun they like. Or worse, they could send false syncing info back to the main server. This would spoil things for everyone.

So my next thought was a hierarchical setup. Players that have played fairly for the longest amount of time would be promoted to act as servers for other players. If something fishy goes on, those players would be demoted and someone else promoted. This wouldn't remove cheating altogether, but it would significantly reduce it.

The problem with a model like this would be syncing. This could be reduced in a couple of ways. The first would be restricting which servers serve which sections of the game: make it so that players who are close together use the same server for syncing. This runs the risk of one player's computer getting bogged down if the players decide to all congregate in one location.

The other solution would be to employ a model similar to a system cache. Each player connects to servers based on latency and available bandwidth. When a player interacts in some way with a resource, one of two things could happen. If the server owns that resource, the action happens immediately. If the server does not own the resource, it communicates with the main server and lets it know "Hey, give me resource X." The main server then contacts whoever currently owns the resource and tells them "Hey, person X needs this resource," at which point they finish up what they are doing and transfer all the resource info to the second player-server. Or, in order to avoid unnecessary transfer lag, the player could be linked directly to the owner of the resource and do whatever needs to be done there. This means either higher amounts of data transfer for the server players or increased lag for each of the players. Either way, it runs the risk of being very laggy when a given resource is highly contested (i.e. two players communicating with two different servers start shooting each other or something).
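Roughly the flow I'm imagining, as a Python sketch (all the class and method names here are made up purely for illustration, not from any real engine):

Code:
# Rough sketch: a main server acting as a directory of resource owners,
# with ownership transferred between player-servers on demand.

class PlayerServer:
    def __init__(self, name):
        self.name = name
        self.owned = {}                      # resource_id -> list of applied actions

    def handle_action(self, resource_id, action, directory):
        if resource_id in self.owned:
            # We own it: apply the action immediately.
            self.owned[resource_id].append(action)
        else:
            # Ask the main server to move ownership to us, then apply.
            state = directory.request_transfer(resource_id, self)
            self.owned[resource_id] = state
            self.owned[resource_id].append(action)

    def release(self, resource_id):
        # Finish up and hand the state back for transfer.
        return self.owned.pop(resource_id)


class MainServer:
    """Keeps track of which player-server currently owns each resource."""
    def __init__(self):
        self.owner_of = {}                   # resource_id -> PlayerServer

    def request_transfer(self, resource_id, requester):
        current = self.owner_of.get(resource_id)
        if current is None:
            state = []                       # fresh, previously unowned resource
        else:
            state = current.release(resource_id)
        self.owner_of[resource_id] = requester
        return state


# Two player-servers contending for the same door/chest/whatever:
main = MainServer()
a, b = PlayerServer("A"), PlayerServer("B")
a.handle_action("door_42", "open", main)
b.handle_action("door_42", "close", main)   # forces a transfer from A to B
print(main.owner_of["door_42"].name)         # -> B
print(b.owned["door_42"])                    # -> ['open', 'close']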

IDK, this is just my musing. I've been thinking "why can't games run more like BitTorrent?" and trying to figure out a way to keep cheating down while at the same time leveraging each player's processing power and network resources for the good of the game.

Any thoughts or ideas (or things I didn't consider)? How would you approach the problem of using untrusted systems as game servers?
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,708
4,666
75
Wrong distributed system to model. Not bittorrent: bitcoin!

To be honest, I'm not sure how Bitcoin works behind the scenes. But in principle it's about trading objects of value, and not being able to copy them, without a central repository of knowledge about the objects. While new Bitcoins keep being generated, supposedly the old ones work just as well, and they were much easier to generate. Seems like such a system would work for a number of game objects.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
Wrong distributed system to model. Not bittorrent: bitcoin!

To be honest, I'm not sure how Bitcoin works behind the scenes. But in principle it's about trading objects of value, and not being able to copy them, without a central repository of knowledge about the objects. While new Bitcoins keep being generated, supposedly the old ones work just as well, and they were much easier to generate. Seems like such a system would work for a number of game objects.

:) Well, you could say BitTorrent represents it, because it's a model where each player gives away the rarest resources they own to their peers... hmm, yeah, not quite right. Whatever :D. Just the thought of spreading the workload seems like it should be doable. So much so that I wonder why nobody does this.
 

Markbnj

Elite Member | Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
About time we had a good game design thread. I'm interested in why you are attracted to BitTorrent as a model for propagating shared game state? BitTorrent leverages the fact that files can be chunked up and spread out over a p2p net and a given chunk can hopefully be retrieved from a node that is close to the requester. There's no realtime propagation of state changes involved.

I would begin with thinking about what kind of work the server side of the architecture has to do. The bulk of its task is to accept state changes from clients and publish those out to other connected clients. If fifty or sixty players are in range of an event, they all need to find out that the event occurred within a maximum window of, say, 10-15ms.

Secondly, in some games it has to persist the state of the world very reliably. So in addition to publishing those state changes it has to use some strategy to flush them through to persistent storage.

Thirdly, it has to ensure the integrity of rules and values used in making gameplay decisions. It serves as an authoritative source for rolls, hit stats, timers, etc.

All three of these things seem like they would make up a very challenging problem for a decentralized architecture. Even if you don't have a persistent world the first and third issues are probably enough to make decentralization an unattractive strategy.
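Just to pin those three jobs down, here's a toy Python sketch of a centralized server doing all of them in one place (the message shapes and rules are invented, not from any real game):

Code:
import json, random, time

# Toy sketch of the three jobs above: validate/authorize, broadcast, persist.

class CentralServer:
    def __init__(self, log_path="world_state.log"):
        self.clients = {}                 # client_id -> outbound message list
        self.log_path = log_path

    def connect(self, client_id):
        self.clients[client_id] = []

    def submit(self, client_id, event):
        # 3) Authority: the server rolls the dice, not the client.
        if event["type"] == "attack":
            event["roll"] = random.randint(1, 20)
            event["hit"] = event["roll"] >= 10
        # 1) Publish the validated event to every other connected client.
        for cid, outbox in self.clients.items():
            if cid != client_id:
                outbox.append(event)
        # 2) Persist it so the world survives a crash.
        with open(self.log_path, "a") as f:
            f.write(json.dumps({"t": time.time(), "event": event}) + "\n")

server = CentralServer()
server.connect("alice")
server.connect("bob")
server.submit("alice", {"type": "attack", "target": "bob"})
print(server.clients["bob"])              # bob sees alice's attack, with the server's roll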

I would say, generally, that the kind of hardware you can run a multiplayer server on is cheap. Probably it's never been cheaper. Power consumption is important, but low-power servers can mitigate that somewhat, as can various cooling strategies. Bandwidth would be a significant infrastructure expense. If you have 1000 players on a server each requiring a 64kbps stream to keep in sync, then at the data center side you're looking at 64 megabits (about 8 megabytes) per second.

I think there are all sorts of interesting aspects of these tradeoffs that are worth discussing, but the discussion branches pretty early based on the type of game. If it's a persistent world game with rich interactions that use a lot of bandwidth I would be tempted to think in terms of geographically dispersed clusters. If it's just an mp fps then the database issues are nil, and the whole thing is about state propagation.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
About time we had a good game design thread. I'm interested in why you are attracted to BitTorrent as a model for propagating shared game state? BitTorrent leverages the fact that files can be chunked up and spread out over a p2p net and a given chunk can hopefully be retrieved from a node that is close to the requester. There's no realtime propagation of state changes involved.
Mainly what attracts me to it is the very concept of securely shared resources across the web. Now, BitTorrent has a lot it can lean on since its files are static (hashes on the files), which is exactly what's missing when doing a distributed model for games.

I would begin with thinking about what kind of work the server side of the architecture has to do. The bulk of its task is to accept state changes from clients and publish those out to other connected clients. If fifty or sixty players are in range of an event, they all need to find out that the event occurred within a maximum window of, say, 10-15ms.
This is another thing that sort of attracts me to a shared-resources model. The current models essentially say "Hey, there are one (or many) servers in these fixed locations; you get your stuff from there." Whereas with a shared model you could have a much more fluid and dynamic mapping of resources. Your latency could be cut by the sheer fact that a trusted sender is physically closer than the main server.

Secondly, in some games it has to persist the state of the world very reliably. So in addition to publishing those state changes it has to use some strategy to flush them through to persistent storage.

Thirdly, it has to ensure the integrity of rules and values used in making gameplay decisions. It serves as an authoritative source for rolls, hit stats, timers, etc.

These are big issues that also introduce potential security vulnerabilities. Pulling from my knowledge of how cache coherency works, some of them can be resolved in a similar fashion; however, it means you have to be very specific about who each player is communicating with.

One potential method for flushing through state changes might be to assign trusted players evenly spaced zones/resources (whatever they may be) and propagate changes based on distance from those zones, the idea being that something that occurs on the other side of the map doesn't have to be registered immediately on your side of it.
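Something like this, as a rough Python sketch (the tiers and numbers are completely made up):

Code:
import math

# Rough sketch of distance-tiered propagation: the farther an event is from
# a player, the less urgently it gets pushed to them.

def update_interval(event_pos, player_pos, near=50.0, far=500.0):
    """Return how often (in seconds) this player needs updates about the event."""
    d = math.dist(event_pos, player_pos)
    if d <= near:
        return 0.05     # effectively immediate (every tick)
    if d <= far:
        return 0.5      # a couple of times per second is plenty
    return 5.0          # other side of the map: batch it up

players = {"alice": (0, 0), "bob": (120, 40), "carol": (900, 900)}
event_at = (10, 10)
for name, pos in players.items():
    print(name, update_interval(event_at, pos))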

All three of these things seem like they would make up a very challenging problem for a decentralized architecture. Even if you don't have a persistent world the first and third issues are probably enough to make decentralization an unattractive strategy.

I would say, generally, that the kind of hardware you can run a multiplayer server on is cheap. Probably it's never been cheaper. Power consumption is important, but low-power servers can mitigate that somewhat, as can various cooling strategies. Bandwidth would be a significant infrastructure expense. If you have 1000 players on a server each requiring a 64kbps stream to keep in sync, then at the data center side you're looking at 64 megabits (about 8 megabytes) per second.
Yeah, and that, I think, is the big limiting factor: expanding the single-dedicated-server strategy past 64 players could be pretty tricky. This is the main bottleneck that I would want to avoid by doing a distributed model.

I think there are all sorts of interesting aspects of these tradeoffs that are worth discussing, but the discussion branches pretty early based on the type of game. If it's a persistent world game with rich interactions that use a lot of bandwidth I would be tempted to think in terms of geographically dispersed clusters. If it's just an mp fps then the database issues are nil, and the whole thing is about state propagation.
:D What really got me thinking about it was how crappy the Minecraft networking code must be. As far as I can tell, it does everything through the central server. However, after maybe 10 players join a server, things start going haywire: giant swaths of the map don't get sent out, doing anything in the game becomes impossible, and it just becomes a bad experience.

I was thinking that maybe a large portion of that is the bandwidth problem. I have little doubt that Minecraft has to push a fairly large amount of data, which is why it sees such glitchy behavior when too many people join.

Anyway, this is why I started this thread. I'm really curious to see whether this sort of approach (or a novel approach from anyone else) could work for a game environment. I don't expect it to be easier than a straightforward client/server environment.
 

Markbnj

Elite Member | Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
Yeah, ease of implementation isn't even in the picture yet. It's more about feasibility.

You mentioned 64 players, and I'm not sure if it was because you misread what I typed, but anyway I gave this some more thought, and it seems to me that for a small-scale FPS (32 or 64 players on a constrained map) there is really no reason to think about a p2p model. The resource demands just aren't that high. FPS games supporting dedicated servers and up to 64 players can run on a single box with a fast cable connection (4 Mbps if they all need 64 kbps streams).

So this discussion is probably only relevant to the "massively multiplayer" genre, say 1000+ players in a single shared world instance.

In that context it's evident that you need some sort of centralized controller. You need authentication, and you need some game-wide broadcast and control capability in order to manage the game and population. Probably just for security reasons you would also have this centralized controller handle rule checks, rolls, hit tests, etc.

In terms of what might be decentralized, that leaves the state updates within a shared game space. Every time I have thought about this subject I have ended up with the concept of a player in the middle of a circle of observable phenomena. The player can observe and potentially interact with anything inside the circle. The player doesn't know about and can't interact with anything outside the circle. The size of the circle is dependent on draw distance, "zone" boundaries, whatever, but that moving envelope defines the amount of stuff the game has to have status updates on in order to show that player what's happening in the part of the world he can observe.
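A crude Python sketch of that filtering, with an invented radius, just to show the envelope idea:

Code:
import math

# Crude sketch of the "circle of observable phenomena": only ship an update
# to players whose circle actually contains the thing that changed.

OBSERVE_RADIUS = 300.0   # draw distance / zone size, whatever it ends up being

def observers(event_pos, player_positions, radius=OBSERVE_RADIUS):
    """Players who can see the event and therefore need the state update."""
    return [p for p, pos in player_positions.items()
            if math.dist(pos, event_pos) <= radius]

positions = {"alice": (0, 0), "bob": (250, 100), "carol": (2000, 5)}
print(observers((50, 50), positions))   # ['alice', 'bob'] -- carol never hears about it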

If the game is chunked into zones, then perhaps we can think of a model where, for example, some number of clients in a given zone are acting as the database of record, replicating between themselves, and negotiating hand-offs when one of them leaves. At the same time all the clients would still be communicating with a central server as described above, and for other purposes such as getting lists of what people are in other zones, mail messages, etc.

If the game isn't chunked into zones (has anyone other than Vanguard attempted a truly zoneless MMO?) then this scheme gets more complicated. What represents a shared space? You could say all the clients whose circles of observability overlap, and then form ad hoc groupings as players move around... but it's too late and this idea makes my brain hurt.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
Unfortunately I'd say you can't trust the client machines to act as enforcers or even relayers of state -- besides cheaters, there would also be griefers who would use clients that pass corrupted state to their peers either as another form of cheating or just for "fun."

The idea of trusting the client programs of some users to do server-side work also breaks down if their program or their account is compromised. It's a common problem in WoW to have an account of a "good" user stolen by phishing or malware, then have it used for gold farming and/or have its gear sold off.

Edit: One idea that I could see possibly working is something more like Folding@home and other "work unit" applications: the client program would be set up to accept some sort of dynamic chunk of code to run that changes often and isn't necessarily usable by itself, so that it would be harder for cheaters, griefers, or farmers to have a client written to tamper with the work.

The central server could send work units, get back results and check their integrity, perhaps by sometimes sending the job to multiple clients to check for clients that are tampering with the work.
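Something along these lines, as a rough Python sketch (the "work unit" here is just a stand-in function):

Code:
import random
from collections import Counter

# Rough sketch: hand the same work unit to several clients and take the
# majority answer, flagging anyone who disagrees.

def verify_work(work_unit, clients, redundancy=3):
    chosen = random.sample(list(clients), redundancy)
    results = {name: clients[name](work_unit) for name in chosen}
    majority, _ = Counter(results.values()).most_common(1)[0]
    cheaters = [name for name, r in results.items() if r != majority]
    return majority, cheaters

honest = lambda wu: sum(wu)                   # what the work unit is "supposed" to compute
clients = {"c1": honest, "c2": honest, "c3": honest,
           "c4": lambda wu: sum(wu) + 999}    # tampered client

result, suspects = verify_work([1, 2, 3], clients)
print(result, suspects)                       # 6, and c4 gets flagged whenever it's sampled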

But I doubt writing the code to do that would make more economic sense than just using dynamic server instancing to scale up the server side processing as needed. The overhead is probably also too high for it to make sense to use clients that way for just a game.
 

tatteredpotato

Diamond Member
Jul 23, 2006
3,934
0
76
These types of design decisions can make or break a game. For example, I played a fair amount of Gears of War on the Xbox 360 when it came out. One "feature" of Gears is that they do hit detection on the server (kind of the opposite of your approach: they wanted to move more processing to the server). What this caused was 1) obscene host advantages (since hosts weren't dedicated servers, but rather just one player in the game), and 2) really annoying lag issues. For example, if you wanted to shoot at a player, you would have to lead the player by an amount proportional to the lag you had with the server.

Also, gamers will game any system you put in place. People playing Gears would patch their Xboxes through a PC and do some rather tricky things with controlling the network traffic to their box in order to fool Epic's heuristics into thinking they were the best candidate for host.

Also, I think it's very difficult to talk about a specific architecture without some requirements. For example, FPS games require very low latency, so anything requiring more than one hop will likely be too slow. Other games like RTS or MMO can tolerate a bit of lag and could perhaps accommodate an interesting network architecture. Personally I feel the only area that would need much of this would be the MMO space, since most popular FPS or RTS games just have players host their own servers.
 

PingSpike

Lifer
Feb 25, 2004
21,758
603
126
This is a pretty interesting discussion. You mentioned Minecraft runs poorly. I wonder how he does the updates on that. I was under the impression he has the world broken into chunks. It seems like you'd want to update only the chunk the player is in and all adjacent chunks, and then perhaps update all other chunks less often. You wouldn't have to update far-off chunks until the player could see them.

I see the goal you're getting at though. Leverage clients' bandwidth to allow scaling without additional resources (bandwidth) needed at the server. I just don't see how you can trust the clients though. They will find a way to cheat (they do with the current models used!), and even if they didn't, they are unreliable, so you'd have to store the state somewhere else as well in case they lost their connection unexpectedly.

And what would happen if a large number of players congregated into the same space? Wouldn't all the clients reporting to their "sub servers" then have their sub servers reporting to the master all at once? You'd still need the bandwidth capacity of the traditional client server model in that case.

I sort of thought about this once when I was trying to make AI for a game mod involving a bunch of zombies. With zombies, of course, you want tons and tons of them. To reduce the updates required, I had some crappy code that would destroy them on clients that were far away, to avoid updating them when no one was around. The idea was that the AI would run on the server and the zombies would all exist there, but they'd pop in and sync only when within range of a client. I still envisioned a worst-case scenario of the clients all spread out and having to update every enemy in the game anyway, though.
 

tatteredpotato

Diamond Member
Jul 23, 2006
3,934
0
76
This is a pretty interesting discussion. You mentioned Minecraft runs poorly. I wonder how he does the updates on that. I was under the impression he has the world broken into chunks. It seems like you'd want to update only the chunk the player is in and all adjacent chunks, and then perhaps update all other chunks less often. You wouldn't have to update far-off chunks until the player could see them.

My understanding is that the world is broken into "chunks" that are 16x16 areas extending from bedrock to the top of the world (so a 16x16x128 block area). If you've played any MC you will on occasion encounter "world holes" where a certain chunk just doesn't get drawn on the client, and so if you walk into it the client will try to fall, but the server keeps correcting the client because it's aware of the geometry there.

EDIT: I remember an article I saw on HN about MC netcode: http://corbinsimpson.com/entry/take-a-bow
 

brandonb

Diamond Member
Oct 17, 2006
3,731
2
0
Cogman,

Every single game I've ever seen that used a peer-to-peer system ultimately failed in some manner, whether due to lag, out-of-sync problems, cheating, or other oddities.

For example:

Falcon 4.0 (from 1998) used a peer-to-peer system for its multiplayer. But one thing people ran into almost immediately is that the network ultimately ran at the lowest common denominator. If you had people using DSL/cable to transmit data and one player on the link was a dial-up user, the network had a lot of trouble staying in sync and staying stable, due to the fact that some were fast and others slow. Usually the DSL/cable players were forced to run at dial-up speeds to try to stay stable; everybody ended up downgraded to the least common denominator. When the source code for that game was leaked, the first thing people did was rewrite the multiplayer to be client/server.

In my opinion and experience, the simple client/server model is probably best for video gaming. Peer to peer doesn't matter if you are transferring files like BitTorrent does, where the speed of the peers is allowed to fluctuate and QoS doesn't matter. But games are more precise.

Quake's servers used a model of sending out updates every 100ms (if I remember right), so 10 packets/updates per second led to mostly fluid gameplay. The servers are balanced to be able to process and send out those updates within the 100ms window. If it took 10ms to process a player, obviously the max number of players the server could handle was 10... so it's sort of a balancing act.
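In toy Python the balancing act is basically just this (numbers made up to match the example):

Code:
# Toy version of the tick budget: a fixed 100 ms update window, with each
# player's processing eating into it.

TICK_MS = 100.0              # send updates every 100 ms
PER_PLAYER_MS = 10.0         # pretend each player takes 10 ms to process

def tick_report(num_players):
    work = num_players * PER_PLAYER_MS
    if work > TICK_MS:
        return f"{num_players} players: {work:.0f} ms of work -- can't keep up with a 100 ms tick"
    return f"{num_players} players: {work:.0f} ms of work, {TICK_MS - work:.0f} ms idle per tick"

for n in (4, 8, 10, 12):
    print(tick_report(n))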

Asheron's Call (an MMO) uses a normal "command" or script-type interface. If you moved into a new area and there was a new monster in the area, it would send the client a "CreateObject" message with the simple details, such as the monster name, location, and a GUID. If the monster attacked the player, it would send a message like "MonsterAttack(GUID of monster, GUID of player, damage)" and the client would act upon the command from the server. Instead of the 100ms heartbeat from Quake, it was more loose. It used a bubble, or radius, around the player, and if too many things were going on within the bubble or surrounding area, the client would lag and the server would create a "portal storm," which sort of just randomly teleported a player in a random direction.
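Roughly this kind of thing; the message shapes below are illustrative Python, not AC's actual protocol:

Code:
import json, uuid

# Rough idea of the command-style interface: the server describes what happened,
# the client just acts on it.

def create_object(name, location):
    return json.dumps({"cmd": "CreateObject", "guid": str(uuid.uuid4()),
                       "name": name, "location": location})

def monster_attack(monster_guid, player_guid, damage):
    return json.dumps({"cmd": "MonsterAttack", "monster": monster_guid,
                       "player": player_guid, "damage": damage})

def client_handle(message):
    msg = json.loads(message)
    if msg["cmd"] == "CreateObject":
        print(f"spawn {msg['name']} at {msg['location']}")
    elif msg["cmd"] == "MonsterAttack":
        print(f"monster {msg['monster'][:8]} hits player for {msg['damage']}")

spawn = create_object("goblin", [12.0, 0.0, 43.5])
client_handle(spawn)
client_handle(monster_attack(json.loads(spawn)["guid"], "player-guid-1", 7))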

EverQuest segregated multiplayer into zones, but had no limits on a zone; I've seen 500 players in one. What it did was essentially place each player in a bubble. When network traffic on the server is light, it transmits the surrounding actions out to a certain radius, say 500 (so within 500 meters you can see things moving around, etc.), but when you have high network traffic, it reduces the radius to maybe 100, so as more players enter the area, less info is sent to each player to keep things equalized. So it used a bubble routine that changed size.
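You could caricature that shrinking bubble in a few lines of Python (thresholds invented):

Code:
# Caricature of the shrinking bubble: under load, each player just gets told
# about a smaller radius around themselves.

def interest_radius(players_in_zone, max_radius=500.0, min_radius=100.0):
    if players_in_zone <= 50:
        return max_radius
    # Shrink as the zone fills up, but never below the minimum.
    scale = 50.0 / players_in_zone
    return max(min_radius, max_radius * scale)

for n in (10, 50, 100, 250, 500):
    print(n, "players ->", round(interest_radius(n)), "unit bubble")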
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
This is a pretty interesting discussion. You mentioned Minecraft runs poorly. I wonder how he does the updates on that. I was under the impression he has the world broken into chunks. It seems like you'd want to update only the chunk the player is in and all adjacent chunks, and then perhaps update all other chunks less often. You wouldn't have to update far-off chunks until the player could see them.
The way it does chunk updates is somewhat bizarre. It seems to either have issues realizing that a chunk wasn't updated correctly, or issues with the priorities of updates. Either way, it is often the case that you will walk along and all of a sudden see a huge gaping hole in the middle of the ground (even with few players). This is something I'd like to avoid.

I see the goal you're getting at though. Leverage clients' bandwidth to allow scaling without additional resources (bandwidth) needed at the server. I just don't see how you can trust the clients though. They will find a way to cheat (they do with the current models used!), and even if they didn't, they are unreliable, so you'd have to store the state somewhere else as well in case they lost their connection unexpectedly.
Unreliability could be somewhat mitigated by the fact that a central server keeps track of what is going on. Yes, it will "glitch" when a player leaves uncleanly, as the client will have to send a request to the server basically saying "OK, who else can I talk to?"

Cheating, as you said, is an issue even with centralized servers. I don't think you could ever eliminate it completely. That being said, doing a trusted player sort of setup might ease the sting a bit.

And what would happen if a large number of players congregated into the same space? Wouldn't all the clients reporting to their "sub servers" then have their sub servers reporting to the master all at once? You'd still need the bandwidth capacity of the traditional client server model in that case.
Yeah, and I don't see much of a way around it either. What's worse, if the sub-server that all the clients have congregated on crashes, then it would be a tangled mess to get things set up and running again from the main server's standpoint.
 

Markbnj

Elite Member | Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
The more I think about it the more convinced I am that you're trying to solve the wrong problem. If a game succeeds then the costs of the infrastructure can be borne, whatever architecture is chosen. I think the problems that need solving are the ones that affect gameplay, and those solutions should drive the architecture decisions. So you want the lowest possible latency, the most secure communications, the richest amount of player interaction, etc. Personally I don't think any p2p-oriented architecture will lead down a path toward solutions for those problems.

If I were setting up an MMOG today I would maybe head in completely the opposite direction: build a single air-conditioned room, pull in a few gbps of fat fibre, and stick an IBM Z-series in the middle. Then I can spit out as many virtual servers and databases as I want and make the whole thing completely dynamic based on load.

Only half-kidding :).
 

Schmide

Diamond Member
Mar 7, 2002
5,745
1,036
126
My thoughts on multiplayer, cheating, griefing, and decentralization.

In terms of latency, I think whatever processing you can offload onto the user's machine should only make response times better. A non-trip is quicker than a single trip, which should be faster than a round trip. Although the logic requires a round trip to finalize any deterministic move in a game, removing non-events from the communication stream should free communication resources for other activities. If one's own machine can determine the action was a miss, there is no need to validate and synchronize such a non-event. However, this concentrates the communication load on validating the events that do matter. Of course there needs to be some standard update rate for natural movement in the game.
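In sketch form (Python, everything here invented): only actions that could affect shared state leave the machine at all.

Code:
import math

# Sketch of filtering out "non-events" locally: clear misses never get sent,
# possible hits get queued for validation.

def shot_could_hit(shooter_pos, aim_dir, target_pos, tolerance=2.0):
    """Cheap local test: does the (unit-length) aim ray pass anywhere near the target?"""
    to_target = (target_pos[0] - shooter_pos[0], target_pos[1] - shooter_pos[1])
    # Perpendicular distance from the target to the aim ray (2D, normalized direction).
    cross = abs(aim_dir[0] * to_target[1] - aim_dir[1] * to_target[0])
    along = aim_dir[0] * to_target[0] + aim_dir[1] * to_target[1]
    return along > 0 and cross <= tolerance

outbox = []
def fire(shooter_pos, aim_dir, targets):
    for t in targets:
        if shot_could_hit(shooter_pos, aim_dir, t):
            outbox.append(("possible_hit", shooter_pos, aim_dir, t))  # needs auditing
    # Clear misses never leave the machine.

fire((0, 0), (1, 0), [(50, 1), (10, 40)])
print(outbox)    # only the target near the aim line gets reported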

If a user's machine determines that an event affects global objects (i.e. a hit), there needs to be some auditing of that action. This requires either a centralized server or some third-party processing to finalize the action. With a client/server relationship you have a trusted source for the validation; with peer-to-peer you have untrusted sources, but you could form some trust by comity.

Implications of trust by comity: if someone is cheating, they are either bending the logic of the game or the input to the game. If they are trying to bend the logic of the game, a simple redundant calculation by a few computers could form a consensus on the action. As long as most of the computers are not cheating, the bad logic should be rooted out quickly, especially if random peers are chosen for such audits.

Input cheating, and what can be done to catch it? Regardless of who does the audit, looking for disparities in movement, extreme inhuman reaction times, ghost objects*, or statistically greater-than-average scoring could detect nefarious players. Whatever final judgment is made is far beyond the net code.

* A ghost object is an object that exists in the hit code but is invisible to the user or shows up for a less-than-perceivable time frame. Hitting it more often than chance would allow, or reacting to it, should be treated as a positive test for cheating.
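On the statistical side, even something as dumb as this (Python, thresholds picked out of thin air) would already flag the worst offenders:

Code:
import statistics

# Dumb example of statistical input auditing: flag players whose reaction times
# are implausibly fast compared to the population.

def flag_inhuman(reaction_times_ms, hard_floor=120.0, z_cutoff=-3.0):
    mean = statistics.mean(reaction_times_ms.values())
    stdev = statistics.pstdev(reaction_times_ms.values()) or 1.0
    flagged = []
    for player, rt in reaction_times_ms.items():
        z = (rt - mean) / stdev
        if rt < hard_floor or z < z_cutoff:
            flagged.append(player)
    return flagged

samples = {"alice": 250, "bob": 310, "carol": 280, "mallory": 40}
print(flag_inhuman(samples))    # ['mallory']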

Griefing could be addressed with caps on any single user's communication and third-party audits. Robust net code should be the foundation for preventing it.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
The more I think about it the more convinced I am that you're trying to solve the wrong problem. If a game succeeds then the costs of the infrastructure can be borne, whatever architecture is chosen. I think the problems that need solving are the ones that affect gameplay, and those solutions should drive the architecture decisions. So you want the lowest possible latency, the most secure communications, the richest amount of player interaction, etc. Personally I don't think any p2p-oriented architecture will lead down a path toward solutions for those problems.

If I were setting up an MMOG today I would maybe head in completely the opposite direction: build a single air-conditioned room, pull in a few gbps of fat fibre, and stick an IBM Z-series in the middle. Then I can spit out as many virtual servers and databases as I want and make the whole thing completely dynamic based on load.

Only half-kidding :).

My thinking is that if you are going to do a distributed model, you pretty much have to do it from the start. Building it the traditional way and then deciding later "well, I guess I'll go distributed" is pretty much never going to happen.

brandonb's mention of games that have tried decentralized models is interesting. I didn't know it had been attempted before.

What I think would be cool would be a totally decentralized approach. I think it is a shame that some MMOs die simply because the company decides not to support them anymore. Wouldn't it be great if MMOs could only die when nobody plays them?

That is the sort of thinking that is going on behind my harebrained ideas. I would like to see games that can't be easily killed.

That being said, when/if I make a game, it will probably have simple/no multiplayer support :D. However, that won't stop me from toying around with new ideas like this.
 

Markbnj

Elite Member | Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
What I think would be cool would be a totally decentralized approach. I think it is a shame that some MMOs die simply because the company decides not to support them anymore. Wouldn't it be great if MMOs could only die when nobody plays them?

That raises other interesting issues. Most operators make you agree to a license and TOS that allows them to deny you access to the game under certain circumstances. Is that meaningful when most or all of the game runs locally? What if all they control is authentication, they decide to turn that off, and someone implements a replacement? The company would have no control over their "service" at that point. So anyone could do anything to the binaries, network traffic, whatever, and there would be no penalty for misbehavior.

Which is why I raised the point about gameplay driving architecture choices, which isn't the same thing as changing architectures. Right now there seem to be few positives in favor of a decentralized approach, lots of negatives, and no clear driver for things to move in that direction.

In fact, on a more general level, the world seems to be moving back in the other direction. When I started in this business fat servers and thin clients were the norm. Then we moved to client-server and peer-to-peer models where there was a lot of computing and display power available at the leaf nodes of the system. Now things seem to be swinging back, with more and more people using relatively low power mobile devices, and "applications" living primarily on remote servers.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
That raises other interesting issues. Most operators make you agree to a license and TOS that allows them to deny you access to the game under certain circumstances. Is that meaningful when most or all of the game runs locally? What if all they control is authentication, they decide to turn that off, and someone implements a replacement? The company would have no control over their "service" at that point. So anyone could do anything to the binaries, network traffic, whatever, and there would be no penalty for misbehavior.

Which is why I raised the point about gameplay driving architecture choices, which isn't the same thing as changing architectures. Right now there seem to be few positives in favor of a decentralized approach, lots of negatives, and no clear driver for things to move in that direction.
Yeah, that would be a sticking point. The only real way around it is to leave the traditional subscription-based model of current MMOs (as far as the money goes). You would pretty much be forced into making your profit from selling the game itself.

As for the positives, it would be possible to have non-company supported games be large and diverse. Bigger communities usually lead to more interesting game dynamics.

In fact, on a more general level, the world seems to be moving back in the other direction. When I started in this business fat servers and thin clients were the norm. Then we moved to client-server and peer-to-peer models where there was a lot of computing and display power available at the leaf nodes of the system. Now things seem to be swinging back, with more and more people using relatively low power mobile devices, and "applications" living primarily on remote servers.

Too true. I've thought the whole notion of cloud computing is pretty funny; it really is just looking at today's model and saying "what did we do 30 years ago?".
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
Too true. I've thought the whole notion of cloud computing is pretty funny; it really is just looking at today's model and saying "what did we do 30 years ago?".

True, but now with literally 10,000+ times the bandwidth, server power, remote storage we had in the 80s when 1200 baud was a major upgrade from my first 300 baud modem. That makes server-based designs possible that wouldn't have made sense even 5 years ago.
 

Markbnj

Elite Member | Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
That definitely is interesting, especially the story of the sync bug, which more or less mimics my experience of finding any intermittent memory crash in a C++ program. In this particular game context (thousands of units per game, only 6-8 humans playing) the approach makes perfect sense.
 

Cogman

Lifer
Sep 19, 2000
10,286
147
106
That definitely is interesting, especially the story of the sync bug, which more or less mimics my experience of finding any intermittent memory crash in a C++ program. In this particular game context (thousands of units per game, only 6-8 humans playing) the approach makes perfect sense.

What I thought was interesting was that he mentions Halo uses a similar model. It looks like this is something that is doable even with FPS-type games (though I imagine the complexity rises significantly).
 

Markbnj

Elite Member | Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
What I thought was interesting was that he mentions Halo uses a similar model. It looks like this is something that is doable even with FPS-type games (though I imagine the complexity rises significantly).

Yeah the problem with FPS games is that his simtick can't be 100ms. That's probably an order of magnitude too slow. The local client has to react immediately to player input (or the players won't play), and if that happens in between simticks then you could have two clients running hitbox calculations on targets that aren't in exactly the same place.
 

iCyborg

Golden Member
Aug 8, 2008
1,355
63
91
Good article. I can see this working for an RTS, where each player can have lots of units but actively manages only a couple of them, the others being driven by the game engine, and where latency isn't critical. But in an FPS/MMORPG you only have one unit (yourself), so IMO it's better to just send some sort of state vector (position/orientation/weapon selected/action, etc.) every X ms and interpolate in some smart way. At least you don't have to worry about desync problems, since you are re-synced every time you receive a packet. It works great for racing games, where you can predict pretty well where a car will be in the next 100-200ms based on the current state.
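E.g. the dumbest possible version of that prediction, in Python (numbers made up):

Code:
# Dumbest possible version of "send a state vector, extrapolate between
# packets" (dead reckoning).

def extrapolate(last_state, dt):
    """Predict position from the last received position + velocity."""
    x, y, vx, vy = last_state
    return (x + vx * dt, y + vy * dt)

# Last packet said the car was at (100, 20) doing (30, 0) units/sec.
last = (100.0, 20.0, 30.0, 0.0)
for dt in (0.0, 0.05, 0.1, 0.15):       # render frames between 100-200 ms packets
    print(dt, extrapolate(last, dt))
# When the next packet arrives, snap (or blend) toward the authoritative state.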

Things are tough with 1000+ players. In a pure client/server model with all state updates sent from everybody to everybody else, bandwidth requirements rise linearly for clients and O(n^2) for the server, so beastly servers should be able to keep up for n in the thousands, especially if they can prune some updates for parties not in contact. Although there's always a chance that many players will be in the same place. It might not be a small chance either, like a big battle a la Total War with 1000 vs 1000 in an open field...
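Back-of-the-envelope in Python, assuming 64-byte updates at 10 Hz (both numbers invented):

Code:
# Back-of-the-envelope numbers for the all-to-all case: each client uploads one
# state vector and downloads n-1, the server relays n*(n-1).

STATE_BYTES = 64          # assumed size of one state update
RATE_HZ = 10              # assumed updates per second

def per_client_down_kbps(n):
    return (n - 1) * STATE_BYTES * 8 * RATE_HZ / 1000

def server_out_mbps(n):
    return n * (n - 1) * STATE_BYTES * 8 * RATE_HZ / 1_000_000

for n in (64, 1000, 2000):
    print(f"n={n}: client downlink ~{per_client_down_kbps(n):.0f} kbps, "
          f"server uplink ~{server_out_mbps(n):.0f} Mbps")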