
data pull stops after specific number of rows - help

rh71

EDIT>> resolved below.

I'm using a VBScript to pull data from an MS SQL Server db. The script contains a main SQL statement that currently pulls back 1800 records in total. Then, in a loop, for each record it pulls back, it runs additional SQL against it to check various things, then writes the row to a different table (for me to operate on later).
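The structure described above can be sketched like this (a rough stand-in only: Python's sqlite3 instead of ADO/SQL Server, and the table and column names are made up):

```python
import sqlite3

# Stand-in database; "source" and "staging" are hypothetical table names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (id INTEGER, val INTEGER)")
conn.execute("CREATE TABLE staging (id INTEGER, val INTEGER, hits INTEGER)")
conn.executemany("INSERT INTO source VALUES (?, ?)",
                 [(i, i * 10) for i in range(1800)])

# Main pull: one statement that returns all 1800 records.
rows = conn.execute("SELECT id, val FROM source").fetchall()

# Per-row loop: an extra round trip to "check various things",
# then a write into the other table to operate on later.
for row_id, val in rows:
    (hits,) = conn.execute("SELECT COUNT(*) FROM source WHERE val > ?",
                           (val,)).fetchone()
    conn.execute("INSERT INTO staging VALUES (?, ?, ?)", (row_id, val, hits))
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM staging").fetchone()[0])  # prints 1800
```

Against a local file this is harmless; against a remote server it means thousands of extra round trips on one long-lived connection, which matters for the problem being described.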

Where I'm seeing the trouble is that every time I run the script, it seems to stop after the 388th, 389th, or 390th row, at random. There is no "SELECT TOP ___" in the statement at all. I tried changing the main SQL statement to pull the data back in a different order, and it still fails on the 388th-390th row it tries to write, so it doesn't appear to be a data issue. To describe it in more detail: it can run VERY fast getting to the 387th row, then stall 15 seconds, then manage to write that row - then it tries the next one and after another 15 seconds it errors.

The error returned is "SQL Server does not exist or access denied" and it points to the lines where I execute the SQL checks (at different lines of execution, randomly). I have also seen it simply report a timeout on a couple of occasions. I found this article on that error: http://support.microsoft.com/kb/328306 but it's like finding a needle in a haystack with all the various causes.

Has anyone ever come across an issue where the server allows only a certain number of db reads/writes, and might that apply here? I find it odd that it trips up right at that point nearly every time regardless of the data it's seeing. I suspected a network issue, but I even ran it locally on the server and got the same result. BTW, I say "nearly" every time because one time it pulled back and wrote all 1800 rows - just once in 2 days of testing though, with the same code.

This similar code structure ran fine against another server and its db so perhaps it's a setting somewhere someone knows about?
 
Are you trying to access the sql server remotely?

I've been testing mostly remotely because that's also the production environment. Previously, this similar script was working when:
- server A initiated (via scheduled tasks) a script on server B, against server B's db.

Now I need another script against a db on server A, but it's not working:
- server A initiated a script on server B, against server A's db
nor is it working when the script is local:
- server A initiated a script on server A, against server A's db

I basically remote desktop'd into server A and ran it locally via cscript and it's still erroring the same exact way.
 
Sounds like you might be running out of connections. Is the server fully licensed? Some dev versions can be limited in the number of simultaneous connections you can complete.
 
In the script I've only done one connABC.Open before the loop and used connABC.Execute the rest of the way for the SQL checks; for the writes, I use Set oCMD.ActiveConnection = connABC up top ... and then oCMD.Execute within the loop. Does that still mean multiple connections? I even close the connection at the end, but it never really gets to that point.
I just tried removing the inserts altogether and it still hangs prematurely and errors.

I will have to ask about the server license.

EDIT>> I can see the server properties and under connections, it is set to 0 = unlimited. The server is also set to allow remote connections, with a timeout of 600 seconds. It usually fails on me WAAAAAY before that. More like 45 seconds.
 

It shouldn't open multiple connections, but what Mark is alluding to is that since it is a production environment, between your connection and other clients' connections, the SQL Server connection pooler is running out of connections. The situation is exacerbated if your DB is hosted on the developer edition of SQL Server, because the connection pooler is limited to something like five connections.
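To make the exhaustion scenario concrete, here's a toy sketch (plain Python with a bounded semaphore, not real ADO or SQL Server pooling) of a pool capped at five connections; once all slots are out, the next request is refused, which is roughly how exhaustion surfaces on the client:

```python
import threading

class TinyPool:
    """Toy stand-in for a connection pool capped at `size` connections."""

    def __init__(self, size):
        self._slots = threading.BoundedSemaphore(size)

    def acquire(self, timeout=0.0):
        # Returns True if a slot was free, False if the pool is exhausted.
        return self._slots.acquire(timeout=timeout)

    def release(self):
        self._slots.release()

pool = TinyPool(size=5)
taken = [pool.acquire() for _ in range(5)]  # all five slots hand out a "connection"
sixth = pool.acquire(timeout=0.01)          # the sixth request is refused
```

The first five acquires succeed; `sixth` comes back False until someone calls `release()`.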

Regardless, I think running this from within a script is a bad idea. What you're doing is pulling the initial data and then [I presume] leaving the connection open while you work on those 1800 rows. If you're timing out, chances are other production clients connected to SQL Server are timing out as well.

Have you looked into utilizing ADO.NET for this? An offline DataSet type of object seems ideal for this task.
 
Sounds like it's not connection limits then. Any chance of a deadlock? I was leaning the same way as Oyster, i.e., why are you looping through one result set and using the same connection to call back into the server? But I assume the first query must have completed and the connection closed; otherwise I believe you would get an exception on the client side when you tried to execute the query inside the loop.

Bottom line for me is: you're getting "not found/access denied" sporadically, so I think that likely indicates something happening on the server, rather than the client. You said it worked reliably on a different server. That's the signature of a classic configuration issue, so I would go through the properties of both servers with a fine-toothed comb.
 

Yep, this is what I was thinking. I didn't originally think that it was a connection issue, but probably something with a maximum data transfer. Or even, something along the lines of a maximum connection time.

Whenever you are working with slow media, best practice is usually to open a connection to the media, get all the data you need (preferably in one step), close the connection, do whatever data manipulation you need on the data, and then open a new connection to submit the changes.
 
Well, thanks for the insight all. The fix... get this... was to add a small delay (0.25 seconds) between each write in the loop. Apparently it was "concurrent" connections that closed it up...
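For anyone reusing that workaround: the idea is just a minimum gap between writes. The actual script presumably slept in the loop (something like WScript.Sleep 250 in VBScript); here is a hypothetical Python sketch of the same throttle, demoed with a tiny gap so it runs fast:

```python
import time

class ThrottledWriter:
    """Wrap a write callable so successive calls are at least min_gap seconds apart."""

    def __init__(self, execute, min_gap=0.25):
        self._execute = execute
        self._min_gap = min_gap
        self._last = None

    def execute(self, *args):
        if self._last is not None:
            wait = self._min_gap - (time.monotonic() - self._last)
            if wait > 0:
                time.sleep(wait)  # pause so writes never arrive back to back
        self._last = time.monotonic()
        return self._execute(*args)

# Demo: five "writes" with a 10 ms gap instead of 250 ms.
calls = []
writer = ThrottledWriter(calls.append, min_gap=0.01)
start = time.monotonic()
for i in range(5):
    writer.execute(i)
elapsed = time.monotonic() - start
```

A delay papers over the symptom, though; if connection churn really was the cause, batching the writes on one short-lived connection (as suggested above) is the sturdier fix.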
 