OneOfTheseDays
Diamond Member
Here is the problem in a nutshell. We have a server that processes requests. For each request, a data array is allocated to hold that request's result information. This array can be anywhere from 0 to 4,000,000 doubles in length. When users repeatedly make requests that require us to allocate large arrays, we quickly run into out-of-memory exceptions.
To combat this problem, we considered allocating a single data array of the maximum size (4,000,000 doubles) and reusing it for every request. This works fine for requests handled in-process, but because we allow remoting (i.e. clients can make requests from remote computers), it is not practical to transfer a 4,000,000-element double array for every request, no matter how small the actual data may be.
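To make the reuse idea concrete, here is a minimal sketch (in Java for illustration, though the server is presumably .NET given the remoting). The class name, buffer size, and `snapshotForTransfer` helper are all hypothetical; the point is that the big buffer is reused in-process, while only the filled prefix is copied into a right-sized array before going over the wire:

```java
import java.util.Arrays;

public class RequestBuffer {
    // Max result size from the problem description: 4,000,000 doubles (~32 MB).
    static final int MAX_RESULTS = 4_000_000;

    // One shared buffer, allocated once and reused for every in-process
    // request, avoiding repeated large allocations.
    static final double[] shared = new double[MAX_RESULTS];

    // For remote clients: copy only the n doubles actually filled into a
    // right-sized array, so the serialized payload matches the real data size.
    static double[] snapshotForTransfer(int n) {
        return Arrays.copyOf(shared, n);
    }

    public static void main(String[] args) {
        // Simulate a small request that produced only 5 results.
        for (int i = 0; i < 5; i++) shared[i] = i * 1.5;
        double[] payload = snapshotForTransfer(5);
        System.out.println(payload.length); // 5 elements sent, not 4,000,000
    }
}
```

Of course, the copy itself is a fresh allocation, which is exactly the tension the question is asking about: the copy is small for small requests, but large requests still produce large transient arrays.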
I've been racking my brain to find a compromise but am stuck at the moment. If anyone could share ideas or insight from a similar situation or solution they've worked on in the past, it would be greatly appreciated. Thanks.