Depending on how you want to do this, there could be numerous possibilities. I hope some of what I've said below will make sense...
To start off with, I personally found this book to be super useful. But it was an easy read because I was already developing in SSIS by then. Plus, I received first-hand training from the M$ guys when SSIS was launched. In any case, I hear there is a Professional SSIS 2008 book, but I haven't read it, so I don't know how it is. YMMV.
Let me make one thing clear: none of the articles, books, etc. that I have read have helped me with the SSIS pet peeves - as I mentioned earlier, since SSIS is too designer-driven, you can't do much when you get a cryptic error message. So most of your initial work will involve trial and error. That being said, since you're using 2008, you won't have to deal with some of them. M$ has resolved many Connect issues - nevertheless, not all of them.
Additionally, it's almost too bad (for the rest of us) that you won't have to deal with some of the issues 2005 developers had to. Simply put, 2008 is more like an SP1 to most of us! The most significant changes are the ability to write C# script code and the enhancements to the Lookup task (you'll be using this!). They have also added new things like the ability to cache lookups on disk... I haven't used these, but from what I have heard, they perform much better.
The following is the strategy I use when I work with new SSIS projects:
1) Create a base package which hosts your universal connections and package-level variables. There is the notion of parent and child packages in SSIS, but stay away from it to keep things simple. There are no "base" classes, so to speak, in SSIS - having a base package is super useful... next time you have to create a new package, you just copy-paste the base and change the package ID (very important - the ID is simply a GUID, and copies that share the same ID make logging and identification confusing). You can add comments explaining the logic of your packages (I'm sure you've seen this in some of the samples above). I'll comment the crap out of my base package - metadata is "my precious."
2) In the base package, I'll also configure common components - like an SMTP Email task on the OnFailure event which notifies the admin when the package fails.
3) All the connections in the base package will be mapped to package variables. For example, MyDBConnection of type ADO.NET will be mapped to a variable called MyDBConnectionADONET. The reason I do this is because of how SSIS raises events (learn the event cycle!). One of the things SSIS does upon package startup is load package configurations and validate metadata. Which takes me to the next point...
4) As soon as you create a project in the IDE, also create and associate each package with its own XML config file. Additionally, just as we created a base package, create a base configuration file (this can hold, for example, all the DB connections that are universal to the project). So each package will end up with two config files - the common one and the package-specific one. SSIS is wonderful that way.
5) The configuration files can be configured (no pun intended) using the GUI - this is where you map the variables (step 3 above) to specific config keys. When a package runs, SSIS loads the config files and initializes the variables, which in turn are used by your connections. I know this all sounds convoluted, but believe me, once you figure this out, you'll be an SSIS master (not!).
There are many other settings involved in running a package from SQL Server Agent (SSAgent). Once you create the package and are able to run it locally, getting past the SSAgent part will be a POC.
I would seriously suggest you understand all the tasks (you won't use all of them) before you start using them. Specifically, w.r.t. your scenario, here is another pet peeve and a way around it which I suspect you'll end up using. I almost brought down a server because I did not know what the heck I was doing - once the system went live and I looked at Profiler, I saw my [huge] mistake. Anyway, I digress...
1) You can't call a SPROC from within a DFT (Data Flow Task) in SSIS by default. This is not SSIS's fault per se, but it is a huge limitation if you want to do things like Lookups, manage SCDs (slowly changing dimensions), Pivots, Merges, export to tables, etc. As you can see, the list pretty much covers the "core" tasks involved in an ETL window. SSIS validates the metadata (column types, names, etc.) at package startup and caches it, for obvious performance-related reasons. By default, SPROCs don't convey metadata over the wire - so your OLE DB Source in the DFT can't see the columns and you can't develop. Unless...
2) You add a simple SET FMTONLY OFF statement at the beginning of your SPROC. Specifically, you would add three statements so that SSIS can get the metadata it needs:

    SET FMTONLY OFF
    SET NOCOUNT ON
    SET ROWCOUNT 0
3) This, by default, is bad juju (it's also how I almost brought down the server when I deployed my first SSIS package). In essence, SSIS calls the SPROC twice - first when it validates metadata at package startup, and next when it actually does the work in the DFT. You won't notice the extra query during development because you only have 100 rows - obviously, a production system with millions of rows is the real deal.
4) To work around this, your SPROC needs to be smart enough to detect how it is being called - for metadata or for real data. As a side note, before I forget: I emphasize using SPROCs because they're easier to maintain than raw SQL queries embedded in the package itself. The way to work around this is to accept a "dummy" VARCHAR or NVARCHAR parameter in addition to any other parameters your SPROC supports. When SSIS asks the SPROC for metadata, it passes empty values for the parameters. When this happens, the "dummy" will be an empty string (to be extra safe, TRIM it, CAST it to DATETIME, and see if the resulting value is 1/1/1900). This is your SPROC's cue to skip processing and return an empty table so that SSIS can get its metadata. This also means that when making the actual call for data, you populate the "dummy" with a junk value. See the sketch below.
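To make that concrete, here is a minimal, untested sketch of what such a SPROC could look like - the procedure, table, and column names are all made up for illustration, so adapt them to your schema:

    -- Hypothetical SPROC illustrating the FMTONLY / "dummy" parameter trick
    CREATE PROCEDURE dbo.usp_GetChangedRows
        @LastLoadDate DATETIME,
        @Dummy        VARCHAR(10)   -- used only to detect metadata-only calls
    AS
    BEGIN
        SET FMTONLY OFF;   -- force real execution so SSIS can see the columns
        SET NOCOUNT ON;    -- suppress "rows affected" chatter
        SET ROWCOUNT 0;    -- make sure no leftover row limit applies

        -- SSIS passes empty parameter values while validating metadata,
        -- so a blank @Dummy means "just show me the shape of the result set".
        IF LTRIM(RTRIM(ISNULL(@Dummy, ''))) = ''
        BEGIN
            SELECT CustomerID, CustomerName, ModifiedDate
            FROM   dbo.SourceCustomer
            WHERE  1 = 0;   -- empty result set, but with the correct metadata
            RETURN;
        END

        -- Real call: the package populates @Dummy with a junk value, so do the work.
        SELECT CustomerID, CustomerName, ModifiedDate
        FROM   dbo.SourceCustomer
        WHERE  ModifiedDate > @LastLoadDate;
    END

If you go the extra-safe CAST-to-DATETIME route instead of the plain empty-string check, make sure the junk value you pass at run time is still castable (e.g. a real date), or the CAST itself will blow up.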
I hope some/all of this is making sense! Now, going back to your initial post, here is how I would address the scenario (again, this is not tested, so it could bring down the server!):
1) Learn the different syntaxes for passing and receiving parameters when calling a SPROC. This is nicely documented in BOL (Books Online). There are times when you'll make ADO.NET calls and others when you'll make direct ODBC calls. To get the variables stored in your table, use an Execute SQL task via an ADO.NET connection and map the parameters to variables. I am assuming these are your "staging" datetime values from your previous ETL run.
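As a rough, untested example of what that Execute SQL task could run (the control table and column names are hypothetical), a single-row result set whose columns you then map to package variables on the Result Set tab:

    -- With an ADO.NET connection, the parameter is referenced by name (@ProcessName)
    -- and mapped to a package variable on the Parameter Mapping tab.
    SELECT LastSuccessfulLoadDate,
           CurrentBatchEndDate
    FROM   dbo.EtlBatchControl
    WHERE  ProcessName = @ProcessName;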
2) Pass the execution on to a DFT. Use an OLE DB source and call the SPROC which retrieves the data (remember the "dummy" parameter from above). I am assuming this is also your update batch.
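In the OLE DB source, choose "SQL command" and call the SPROC with positional ? markers (mapped to variables via the Parameters button) - something along these lines, reusing the hypothetical proc sketched earlier:

    -- First ? = the datetime marker from step 1; second ? = the "dummy" variable,
    -- which holds a junk value at run time so the SPROC does the real work.
    EXEC dbo.usp_GetChangedRows ?, ?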
3) Still in the DFT: pass the output of (2) to a Lookup task. This is what has changed in SSIS 2008. In 2005, LTs did not have the ability to handle new rows properly. Some of the creative ones amongst us discovered that a failed lookup essentially means a new row - so we worked around this by diverting the new rows to a BULK Insert task. However, in 2008 the LT has three outputs - matched rows (your updates), unmatched rows (your new rows), and error rows!
Ensure that you're performing the lookup only on the PK - by default, the LT will pull in all the columns of the reference table - bad juju.
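Instead of pointing the Lookup at the whole table, give it a query that returns only what you actually need - for example (hypothetical dimension table):

    -- Lean reference set: the PK you join on, plus only the columns
    -- you need downstream to compare against the incoming rows.
    SELECT CustomerID, CustomerName, ModifiedDate
    FROM   dbo.DimCustomer;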
4) Skip this. IMO, any and all data modifications should be done at the source - you're making SSIS and the system do extra work by doing this in-memory. If you still must bring down the system (as I did), this is where you'd do any row conversions.
5-6-7) Still in the DFT: of course, the "new rows" output will go to a bulk insert task. However, you know that successful lookups from the current batch mean updates - in most cases, you feed the "update rows" output to a Conditional Split task, which lets you write expressions to detect the exact data modifications. If you have a staging DB as a middle tier, you can pretty much do one batch update for the whole batch here (see the sketch below). You can optionally add a Row Count task before feeding any of the outputs, to see how many rows were worked on.
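If you do land the "update rows" output in a staging table, that batch update can be a single set-based statement along these lines (again, all names are hypothetical and untested):

    -- One set-based UPDATE for the whole batch instead of row-by-row updates.
    UPDATE tgt
    SET    tgt.CustomerName = stg.CustomerName,
           tgt.ModifiedDate = stg.ModifiedDate
    FROM   dbo.DimCustomer AS tgt
    JOIN   staging.CustomerUpdates AS stg
           ON stg.CustomerID = tgt.CustomerID;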
8) Get out of the DFT and follow on with another Execute SQL task which updates the batch markers (dates, etc.) - something like the statement below. Note that if you were using a staging table, this would become unnecessary, since you could then simply query the staging system for the markers of the last successful ETL window.
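That closing Execute SQL task can be as simple as the following (hypothetical control table from earlier; the parameters map to package variables):

    -- Record the high-water mark for the next ETL window.
    UPDATE dbo.EtlBatchControl
    SET    LastSuccessfulLoadDate = @NewLoadDate
    WHERE  ProcessName = @ProcessName;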
9) Optionally, you can send an email at this juncture (I always do) to notify the admin that the task finished without errors (remember that we have a send email task in the OnFailure event).
I did not touch on logging because the idea is the same... create a logging "connection" which is tied to a variable which in turn gets read in from a config key.
Of course, I can keep going on and on and on about SSIS. But I think this is enough to get you started. And believe me, you'll pull your hair out initially, but when you see the whole thing perform, it'll be worth every penny.
Not proof-read.