Thursday, March 29, 2012

BULK LOGGED Operations

I'm seeing some odd behavior:
When applying an initial snapshot to a subscriber, I keep getting errors
saying the log for the database is full and a rollback occurs.
The weird thing is that I have the recovery model set to BULK LOGGED and I
also created a job that dumps the transaction log every two minutes, yet the
log keeps filling up anyway.
Not to mention the log is 15GB in size...
Anyone ever see something like this?
Thanks!
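
For context, the setup described above amounts to something like the
following sketch; the database name and backup path are placeholders, not
details from the thread:

    -- Hypothetical names: MyDB and the backup path are illustrative only.
    ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
    GO
    -- Run every two minutes by a SQL Server Agent job:
    BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';

Note that a log backup can only truncate log records older than the oldest
active transaction, so if the snapshot is applied inside one long-running
transaction, the log keeps growing no matter how often it is dumped or which
recovery model is set. That turns out to be the key to the answer below.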
You may want to try running the Distribution Agent with a -BcpBatchSize of
2^31 - delta (pick delta so the resulting batch size is greater than the
number of rows in your table) and see if the log still grows to a large size.
Be forewarned that you may see the "agent is suspect" message when applying
the bcp file with a large bcp batch size, since the Distribution Agent will
not be able to post a progress message until the entire table is bulk-copied
in. You should be OK as long as the log doesn't grow uncontrollably.
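
As a sketch, the agent invocation might look like the following; every
server, database, and publication name here is a placeholder, and
2,147,480,000 is just one example of 2^31 minus a small delta:

    distrib.exe -Publisher PUBSRV -PublisherDB PubDB
        -Subscriber SUBSRV -SubscriberDB SubDB
        -Distributor DISTSRV -Publication MyPub
        -BcpBatchSize 2147480000 -MaxBcpThreads 2

-BcpBatchSize controls how many rows are committed per batch, so a value
larger than the row count of your biggest table makes the whole table go in
as a single batch, which is also why no progress message can be posted until
it finishes.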
Do you know if there are any nonclustered indexes created on the table
before the bcp? If only the clustered index was created before the bcp, do
you know if a load ordering hint was used? (Use SQL Profiler to find out.)
If the Distribution Agent is creating nonclustered indexes before the bcp,
you may want to remove the nonclustered indexes from the .idx scripts before
applying the snapshot and manually create them afterwards, as sketched below.
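
A minimal sketch of that drop/recreate step, with a hypothetical table and
index name (nothing here comes from the thread itself):

    -- Before applying the snapshot: drop the nonclustered index so the bulk
    -- copy only has the clustered index (or heap) to maintain.
    DROP INDEX IX_BigTable_Col1 ON dbo.BigTable;
    -- (On SQL Server 2000, the syntax is DROP INDEX dbo.BigTable.IX_BigTable_Col1.)
    -- ... apply the snapshot ...
    -- Afterwards, recreate the index manually:
    CREATE NONCLUSTERED INDEX IX_BigTable_Col1 ON dbo.BigTable (Col1);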
I have a feeling that you have enabled the "do not lock tables during
snapshot" option for the publication, but please let me know if this is not
the case.
One last thing you may want to try is to change the -MaxBcpThreads setting
of the Distribution Agent to greater than 1. The idea is that if you have
enabled the "do not lock tables during snapshot" publication option, setting
-MaxBcpThreads > 1 may allow your biggest table to be bulk-copied by an
auxiliary thread that is not part of the transaction the Distribution Agent
creates on the main thread when that option is enabled.
HTH
-Raymond
"A. Robinson" wrote:

> I'm seeing some odd behavior:
> When applying an initial snapshot to a subscriber, I keep getting errors
> saying the log for the database is full and a rollback occurs.
> The weird thing is that I have the recovery model set to BULK LOGGED and I
> also created a job that dumps the transaction log every two minutes, yet the
> log keeps filling up anyway.
> Not to mention the log is 15GB in size...
> Anyone ever see something like this?
> Thanks!

The table it keeps dying on has 3,422,613 rows in it...
Are you suggesting that I make the commit batch size something like 4,000,000?
And you're right: I did generate the snapshot with the DO NOT LOCK TABLES
option. I've set the MaxBCPThreads to 2.
Does this sound good to you?
Thanks!
"Raymond Mak [MSFT]" wrote:
> You may want to try running the Distribution Agent with a -BcpBatchSize of
> 2^31 - delta (pick delta so the resulting batch size is greater than the
> number of rows in your table) and see if the log still grows to a large size.
> Be forewarned that you may see the "agent is suspect" message when applying
> the bcp file with a large bcp batch size, since the Distribution Agent will
> not be able to post a progress message until the entire table is bulk-copied
> in. You should be OK as long as the log doesn't grow uncontrollably.
> Do you know if there are any nonclustered indexes created on the table
> before the bcp? If only the clustered index was created before the bcp, do
> you know if a load ordering hint was used? (Use SQL Profiler to find out.)
> If the Distribution Agent is creating nonclustered indexes before the bcp,
> you may want to remove the nonclustered indexes from the .idx scripts before
> applying the snapshot and manually create them afterwards.
> I have a feeling that you have enabled the "do not lock tables during
> snapshot" option for the publication, but please let me know if this is not
> the case.
> One last thing you may want to try is to change the -MaxBcpThreads setting
> of the Distribution Agent to greater than 1. The idea is that if you have
> enabled the "do not lock tables during snapshot" publication option, setting
> -MaxBcpThreads > 1 may allow your biggest table to be bulk-copied by an
> auxiliary thread that is not part of the transaction the Distribution Agent
> creates on the main thread when that option is enabled.
> HTH
> -Raymond
> "A. Robinson" wrote:

Raymond:
Just for the record: it worked like a charm!!!!
A thousand thank you's!!!
"A. Robinson" wrote:
> The table it keeps dying on has 3,422,613 rows in it...
> Are you suggesting that I make the commit batch size something like 4,000,000?
> And you're right: I did generate the snapshot with the DO NOT LOCK TABLES
> option. I've set the MaxBCPThreads to 2.
> Does this sound good to you?
> Thanks!
> "Raymond Mak [MSFT]" wrote: