
Memory Management Issue

OneOfTheseDays

Diamond Member
Here is the problem in a nutshell. We have a server that processes requests. For each request, a data array is allocated to hold that request's result information. This array can be anywhere from 0 to 4,000,000 doubles in size. When users repeatedly make requests that require us to create large data arrays, we quickly run into out-of-memory exceptions.

To combat this problem we thought about allocating just one data array of 4,000,000 elements, the max size, and reusing it for every single request. This works fine for in-process requests, but because we allow remoting (i.e. clients can make requests from remote computers), it is not practical to transfer a 4,000,000-element double array for every request no matter how small the actual data may be.

I've been racking my brain to find a compromise for this but am stuck at the moment. If anyone could offer ideas or insight from a similar situation or solution they've worked on in the past, it would be greatly appreciated. Thanks.
 
Sounds like you just need to pick an appropriate data structure. Figure out what kind of access you need to the data (linear, ordered, or random), then pick the structure with the best memory efficiency for that access pattern.

For instance, if you know you will always be accessing every element of the structure in a linear fashion, then a LinkedList would be appropriate. If you are going to need to search through the elements to find the correct ones, think about using a HashTable or even a tree structure.

Arrays are great for simple tasks, but in larger applications where large amounts of data are in play, optimizing your data structures to suit your needs is a MUST.
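To make the advice above concrete, here is a minimal sketch (in Python, since no language was shown in the thread) of the same lookup done two ways. All names here are illustrative, not from the original posts; the point is only that the access pattern should drive the choice of structure.

```python
# Build the same data twice: once as a flat sequence (good for linear
# walks), once as a hash-based index (good for random access by key).
results = [(i, i * 0.5) for i in range(10_000)]   # list of (key, value) pairs
results_by_key = dict(results)                    # hashed index over the same data

def find_linear(key):
    # O(n) scan: fine if you always walk every element anyway.
    for k, v in results:
        if k == key:
            return v
    return None

def find_hashed(key):
    # O(1) average: better when you need to jump straight to one element.
    return results_by_key.get(key)
```

Both return the same answer; they differ only in how much work (and extra memory for the index) each access costs.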
 
Thanks Crusty, a new data structure was just what the doctor ordered. We are using an in-house data structure, similar to an ArrayList but without the performance penalties, and it's all gravy now.
 
It's very similar to an ArrayList in terms of functionality (you can dynamically resize it), but it doesn't take the object boxing/unboxing performance hit since it is strongly typed. Also, the underlying data is stored in subarrays whose max size is 999, so they never get allocated on the Large Object Heap (LOH).
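The structure described above (a resizable, typed list backed by small fixed-size subarrays so no single allocation ever grows large) can be sketched roughly as follows. This is my own illustrative Python version, not the poster's in-house class; only the chunk cap of 999 and the overall shape come from the post.

```python
from array import array

CHUNK = 999  # sub-array cap from the post, so no backing allocation gets large

class ChunkedDoubles:
    """Resizable sequence of doubles stored in fixed-size chunks."""

    def __init__(self):
        self._chunks = []   # list of small array('d') chunks
        self._size = 0

    def append(self, value):
        if self._size % CHUNK == 0:          # current chunk full (or none yet)
            self._chunks.append(array("d"))  # typed storage: no per-element boxing
        self._chunks[-1].append(value)
        self._size += 1

    def __getitem__(self, i):
        if not 0 <= i < self._size:
            raise IndexError(i)
        return self._chunks[i // CHUNK][i % CHUNK]

    def __len__(self):
        return self._size
```

Growing by appending whole small chunks also means a resize never has to copy the existing data into one big contiguous buffer, which is what gets a plain array in trouble at these sizes.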
 
Originally posted by: OneOfTheseDays
Here is the problem in a nutshell. We have a server that processes requests. For each request, a data array is allocated to hold that request's result information. This array can be anywhere from 0 to 4,000,000 doubles in size. When users repeatedly make requests that require us to create large data arrays, we quickly run into out-of-memory exceptions.

To combat this problem we thought about allocating just one data array of 4,000,000 elements, the max size, and reusing it for every single request. This works fine for in-process requests, but because we allow remoting (i.e. clients can make requests from remote computers), it is not practical to transfer a 4,000,000-element double array for every request no matter how small the actual data may be.

I've been racking my brain to find a compromise for this but am stuck at the moment. If anyone could offer ideas or insight from a similar situation or solution they've worked on in the past, it would be greatly appreciated. Thanks.

That is an extremely bad architectural solution.
You have to consider whether you are getting repeated requests with the same parameters, client-side caching, server-side caching, data-usage patterns, computation distribution, and so on.
Provide more info and there may be a few suggestions.
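One of the ideas raised above, server-side caching of repeated requests with the same parameters, can be sketched in a few lines. This is a hypothetical illustration only: `compute_results` and `handle_request` stand in for whatever the real server does, and the cache is unbounded for simplicity.

```python
_cache = {}  # server-side cache keyed by request parameters (unbounded here)

def compute_results(params):
    # Stand-in for the expensive per-request computation that
    # would otherwise allocate a fresh result array every time.
    return [float(i) for i in range(params["n"])]

def handle_request(params):
    # Parameters must be reduced to a hashable key; a sorted item
    # tuple works for a flat dict of simple values.
    key = tuple(sorted(params.items()))
    if key not in _cache:
        _cache[key] = compute_results(params)
    return _cache[key]
```

A repeated request then returns the already-allocated result instead of building (and later collecting) another large array; a real server would also bound the cache and decide when entries go stale.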
 