SQL Database Setup Question

saratoga172

Golden Member
Nov 10, 2009
1,564
1
81
Not sure if anyone here has SQL experience or can point me in the right direction, but I'm stuck trying to get something set up and my Google-fu is failing me.

I've got a single SQL database I'm using for a few applications. I'm looking to connect another application to the data, but I do not want to expose the database directly to the new application. Instead, I want to set up another server that contains a copy of the data (probably just the tables I need), and this second server will receive the requests from the new application.

The new SQL server would receive a nightly push of any data that has changed within those tables. At the same time, the new server will be receiving some data periodically, and it would need to stream that data back to the first database within a few minutes.
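The nightly, timestamp-driven copy described above can be sketched as follows. This is a minimal illustration using Python and SQLite standing in for the two SQL Server instances; the table and column names (`orders`, `updated_at`) are made up, and on SQL Server you would more likely use transactional replication, log shipping, or an SSIS/Agent job for this.

```python
import sqlite3

# Two connections stand in for DB1 (primary) and DB2 (the copy serving
# the new application). Schema names are hypothetical.
db1 = sqlite3.connect(":memory:")
db2 = sqlite3.connect(":memory:")

for db in (db1, db2):
    db.execute("""CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        amount REAL,
        updated_at TEXT NOT NULL)""")

db1.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 10.0, "2024-01-01"), (2, 20.0, "2024-01-02")])

def nightly_push(src, dst, since):
    """Copy rows changed in src after `since` into dst (upsert by key)."""
    rows = src.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?",
        (since,)).fetchall()
    dst.executemany(
        """INSERT INTO orders (id, amount, updated_at) VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE SET
               amount = excluded.amount,
               updated_at = excluded.updated_at""", rows)
    dst.commit()
    return len(rows)

# Only row 2 changed after the last sync point, so one row is copied.
copied = nightly_push(db1, db2, since="2024-01-01")
```

The key design point is that each run only reads rows newer than the last sync watermark, so the load on the primary is one indexed range scan per table rather than a full copy.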

I don't think I want to use a replicated database, because that would cause unnecessary strain on DB1 and would also mean DB2 is updated as DB1 is updated. I'm trying to limit the strain placed on DB1. By setting up DB2, I move all of the data reads for the new application off DB1 and onto DB2 (while still leaving an active connection for the incoming data).

I had initially thought of calling this a cache database because it would hold an amount of the data and be updated nightly.

Any thoughts on how I could set this up using SQL Server or any links you might have that would be helpful?
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
0. Why does it need to have a separate DB? Are you implying that you just don't want to give it access to all the tables? What is the reasoning for adding redundant logical data, in another location, no less?

1. Are the row updates already timestamped?

2. If not, how hard would it be to add that to them (new column(s), default to now)?

3. How many tables, about how much data, and what kind of connection between them?
 

saratoga172

Golden Member
Nov 10, 2009
1,564
1
81
0. Separate DB because I am trying to keep requests and the connection from the new app from hitting the current DB. The two biggest concerns are security and performance.

1. Row updates have timestamps, so the logic would be: if DB1 has updates on these tables, copy them into DB2 (this would be an overnight thing). The data going from DB2 -> DB1 is unique and would be added as new rows to DB1.

3. Not sure of the number of tables, but the data would probably be in the 40-50 GB range initially, with daily changes probably under 1 GB. The DB servers are hosted in the same SAN/VM environment, so connectivity is internal over 10 Gb hardware.
 

Merad

Platinum Member
May 31, 2010
2,586
19
81
What kind of traffic are you expecting where one DB wouldn't be able to handle it all?
 

nakedfrog

No Lifer
Apr 3, 2001
63,687
20,144
136
You could use a snapshot, or log shipping as someone else mentioned.
You could also create another database on the same server, set up views in that database that reference your primary database, and limit the permissions of the secondary application's user to only that database. You could accomplish the same thing without a new database too, in a variety of ways.
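The view-based approach above can be sketched with SQLite standing in for SQL Server: the primary data lives in one database file, and a second connection exposes only a restricted view of it. All names here (`customers`, `public_customers`) are made up; on SQL Server you would additionally scope the application login's permissions to the view-only database.

```python
import os
import sqlite3
import tempfile

# Create the "primary" database file with a table containing a column
# the new application should never see.
primary_path = os.path.join(tempfile.mkdtemp(), "primary.db")
primary = sqlite3.connect(primary_path)
primary.execute("CREATE TABLE customers (id INTEGER, name TEXT, ssn TEXT)")
primary.execute("INSERT INTO customers VALUES (1, 'Ada', '123-45-6789')")
primary.commit()
primary.close()

# The "secondary" connection attaches the primary and exposes only a
# view over the safe columns. SQLite only allows TEMP views to span
# attached databases; a SQL Server view in a separate database plays
# the same role.
secondary = sqlite3.connect(":memory:")
secondary.execute("ATTACH DATABASE ? AS prim", (primary_path,))
secondary.execute(
    "CREATE TEMP VIEW public_customers AS SELECT id, name FROM prim.customers")

row = secondary.execute("SELECT * FROM public_customers").fetchone()
```

This gets the security isolation without duplicating any data: the view reads straight from the primary tables, so there is no nightly sync to maintain.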
 

beginner99

Diamond Member
Jun 2, 2009
5,320
1,768
136
0. Separate db because I am trying to keep requests and the connection to the new app from hitting the current db. 2 biggest issues of concern are security and performance.


Proof of performance issues? Sounds like premature optimization.

Security is a non-issue. If you have a DB account per user, they will simply keep their privileges; if you use one shared account to connect, just create another one for the new app with only the required privileges.

Back to optimization. If you actually have performance issues, they can almost always be solved by fixing your indexing or your queries. Also use caching: most ORMs offer caching abilities, or if the application in question is a web app you can use HTTP caching (including for AJAX requests).
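The caching advice can be sketched as a small time-to-live cache in front of the database. This is a hypothetical, minimal version of what ORM or HTTP caches do; `fake_db_call` stands in for a real query.

```python
import time

# Cache maps a query key to (timestamp, result). Entries older than the
# TTL are treated as misses and refreshed from the database.
_cache = {}

def cached_query(key, run_query, ttl_seconds=60.0):
    now = time.monotonic()
    hit = _cache.get(key)
    if hit is not None and now - hit[0] < ttl_seconds:
        return hit[1]                  # fresh enough: no DB hit at all
    result = run_query()               # miss or expired: hit the DB
    _cache[key] = (now, result)
    return result

calls = 0
def fake_db_call():
    """Stand-in for an expensive query; counts how often the DB is hit."""
    global calls
    calls += 1
    return [("row", calls)]

first = cached_query("top_orders", fake_db_call)
second = cached_query("top_orders", fake_db_call)  # served from cache
```

Within the TTL, repeated requests never touch the database, which is exactly the load reduction the OP wants from a second server, at a fraction of the complexity.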

Another option is checking the hardware of the server. Replacing HDDs with SSDs, or at least adding an SSD cache in front of the HDDs, would probably also give a huge boost.

Wikipedia runs on MySQL + memcached, and it's one of the most-accessed websites in the world. So performance issues IMHO really are not caused by too much access but by a general issue in the app/DB.
 

saratoga172

Golden Member
Nov 10, 2009
1,564
1
81
Thanks for all of the responses. I've got my path forward with this.

Little more background:
-Boss didn't want anything connecting directly to the database
-Wanted a reason connecting directly wouldn't be a problem
-We have another third-party application that does connect directly and can occasionally cause slowness issues
-I spent the last part of last year working on optimizing the database because we had performance issues when I inherited the environment. Optimizations included archiving some data, rebuilding and updating indexes, removing unused indexes, etc. Didn't feel like adding more strain onto the system I spent time to resolve.


I am going to connect directly to our test environment and capture metrics so we can figure out the true requirements for performance and to ensure it stays running well.
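The metrics-capture step above can be sketched as a small timing harness: run each candidate query repeatedly against the test environment and record latency percentiles. Everything here is hypothetical (the `events` table, the query, the run count); against SQL Server you would point the same idea at a real connection, or use the server's own tooling (Query Store, Extended Events) instead.

```python
import sqlite3
import statistics
import time

def measure(conn, sql, params=(), runs=50):
    """Time a query `runs` times and report p50/p95 latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        conn.execute(sql, params).fetchall()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {"p50": statistics.median(samples),
            "p95": samples[int(0.95 * (len(samples) - 1))]}

# Hypothetical test database standing in for the test environment.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, v INTEGER)")
conn.executemany("INSERT INTO events (v) VALUES (?)",
                 [(i,) for i in range(500)])

metrics = measure(conn, "SELECT COUNT(*) FROM events WHERE v > ?", (250,))
```

Capturing percentiles rather than averages matters here: the occasional slowness the third-party app causes shows up in the tail (p95/p99), not the mean.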
 

beginner99

Diamond Member
Jun 2, 2009
5,320
1,768
136
Thanks for all of the responses. I've got my path forward with this.

Little more background:
-Boss didn't want anything connecting directly to the database
-Wanted a reason connecting directly wouldn't be a problem
-We have another third-party application that does connect directly and can occasionally cause slowness issues
-I spent the last part of last year working on optimizing the database because we had performance issues when I inherited the environment. Optimizations included archiving some data, rebuilding and updating indexes, removing unused indexes, etc. Didn't feel like adding more strain onto the system I spent time to resolve.


I am going to connect directly to our test environment and capture metrics so we can figure out the true requirements for performance and to ensure it stays running well.


OK. So a stupid boss it is. Just ask him why he doesn't want a direct connection, and try to find out what his misunderstandings about the issue are. About optimization: recreate statistics!!! Without proper statistics, any big database will run like crap:

https://dba.stackexchange.com/quest...se-statistics-and-how-can-i-benefit-from-them
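On SQL Server the statistics advice above means `UPDATE STATISTICS` or `sp_updatestats`. As a runnable analogue, SQLite's `ANALYZE` does the same job: it samples tables and indexes into `sqlite_stat1` so the planner can estimate row counts. The table name `t` here is made up for illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")
db.execute("CREATE INDEX t_v ON t(v)")
db.executemany("INSERT INTO t (v) VALUES (?)",
               [(i % 10,) for i in range(1000)])

# Refresh planner statistics; without this the optimizer has no idea
# how many rows each index selects.
db.execute("ANALYZE")

stats = db.execute(
    "SELECT tbl, idx, stat FROM sqlite_stat1 WHERE tbl = 't'").fetchall()
```

After `ANALYZE`, `sqlite_stat1` records that the index covers 1000 rows with roughly 100 rows per distinct value, which is exactly the kind of cardinality information a stale-statistics database is missing.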

Then of course the performance issue can lie within the app itself. Maybe it uses up the total number of allowed concurrent connections, so other apps have to wait until they time out. Or it just uses bad queries, or runs queries in loops, or whatever. In case of doubt, blame the app first and then the database.

Other issues can be misconfiguration of the database: not enough allowed concurrent connections, not enough memory, and so forth. If you have a 1 TB database and only give it 512 MB of RAM, it will obviously suck performance-wise regardless of indexes.

Last but not least is the actual hardware, with a focus on the storage system. More RAM made available to the database system and using SSDs could also result in a huge boost. And maybe your server is just 10 years old; then the CPU could be a problem too.
 

uclabachelor

Senior member
Nov 9, 2009
448
0
71
Your boss's logic is backwards.

And your solution doesn't exactly inspire confidence either. Not trying to be a dick... just telling the truth. From what I've read, your solution is a bandage over the underlying issues (and there is more than one).

-Boss didn't want anything connecting directly to the database

Why not? That's the whole point of a database.

-Wanted a reason connecting directly wouldn't be a problem

Well, let's see.... just about every business out there connects directly to a database.

-We have another third-party application that does connect directly and can occasionally cause slowness issues

Then you need to tell them to fix their application, stop running expensive queries, properly index your database, provide an API for the app, get faster hardware or more hardware resources (expensive!), or all of the above.

-I spent the last part of last year working on optimizing the database because we had performance issues when I inherited the environment. Optimizations included archiving some data, rebuilding and updating indexes, removing unused indexes, etc. Didn't feel like adding more strain onto the system I spent time to resolve.

Dealing with indexes is only a fraction of the optimization process. You may also need to rewrite queries, split queries up, redesign the schema, or rewrite the application logic and its queries.