Thursday, March 29, 2012

bulk upload of files to table

I am using this code to upload files to a temp table in SQL Server:
"for %%f in (F:\Temp\*.txt) do bcp ...."
The thing is that this code loads all the files in a given folder to the
table.
Is it possible somehow (using a switch or anything else) to upload only 10
files each time?
Thanks in advance
peleg
On Tue, 19 Jun 2007 01:09:00 -0700, pelegk1
<pelegk1@discussions.microsoft.com> wrote:

>I am using this code to upload files to a temp table in SQL Server:
>"for %%f in (F:\Temp\*.txt) do bcp ...."
>The thing is that this code loads all the files in a given folder to the
>table.
>Is it possible somehow (using a switch or anything else) to upload only 10
>files each time?
>Thanks in advance
>peleg
The FOR loop used in a bat file or at a command prompt has no
provision for working in batches of 10. Even if it somehow was able
to stop at ten, how would it know to start at number 11 the next time?
The simplest alternative I can think of is to write a program that
copies ten files at a time to a sub-folder, invokes the command as
above against the sub-folder, and then deletes the files from the
sub-folder.
Roy Harvey
Beacon Falls, CT
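A minimal batch-file sketch of that staging approach, for what it's worth (the staging folder, table name, server name, and bcp switches are placeholders, not from the original posts): it moves up to ten files into a sub-folder, loads them, and clears the sub-folder, so each run picks up the next ten.
@echo off
rem Stage up to 10 files, bcp them in, then clean up the staging folder.
setlocal EnableDelayedExpansion
if not exist F:\Temp\Staging mkdir F:\Temp\Staging
set /a count=0
for %%f in (F:\Temp\*.txt) do (
    if !count! LSS 10 (
        move "%%f" F:\Temp\Staging >nul
        set /a count+=1
    )
)
for %%f in (F:\Temp\Staging\*.txt) do bcp MyDb.dbo.TempTable in "%%f" -S MyServer -T -c
del /q F:\Temp\Staging\*.txt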
|||One method is to call a separate command file that exits once the limit is
reached. For example:
REM first command file
@SET /a FileCount = 0
@for %%f in ( * ) do CALL test1.cmd "%%f"
REM second command file (test1.cmd)
@SET /a FileCount = %FileCount% + 1
@IF %FileCount% GTR 10 GOTO Done
bcp ... "%1" in ...
:Done
However, I agree with Roy that you are better off with something more robust
than a command file. Perhaps a VBScript will suffice.
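As a rough idea of what such a VBScript could look like (paths, table name, server name, and bcp switches are assumptions): it snapshots the file names first, loads the first ten with bcp, and moves them to a "Loaded" folder so the next run takes the following ten.
Dim fso, sh, f, list, arr, i
Set fso = CreateObject("Scripting.FileSystemObject")
Set sh = CreateObject("WScript.Shell")
If Not fso.FolderExists("F:\Temp\Loaded") Then fso.CreateFolder "F:\Temp\Loaded"
' Snapshot the names first so moving files does not disturb the loop
list = ""
For Each f In fso.GetFolder("F:\Temp").Files
    If LCase(fso.GetExtensionName(f.Name)) = "txt" Then list = list & f.Path & vbLf
Next
arr = Split(list, vbLf)
For i = 0 To UBound(arr) - 1      ' the element after the final vbLf is empty
    If i >= 10 Then Exit For      ' ten files per run
    sh.Run "bcp MyDb.dbo.TempTable in """ & arr(i) & """ -S MyServer -T -c", 0, True
    fso.MoveFile arr(i), "F:\Temp\Loaded\"
Next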
Hope this helps.
Dan Guzman
SQL Server MVP
"pelegk1" <pelegk1@.discussions.microsoft.com> wrote in message
news:2AFC6622-4C89-47F1-805C-33EDEE96BEA1@.microsoft.com...
>I am using this code to upload files to a temp table in SQL Server:
> "for %%f in (F:\Temp\*.txt) do bcp ...."
> The thing is that this code loads all the files in a given folder to the
> table.
> Is it possible somehow (using a switch or anything else) to upload only 10
> files each time?
> Thanks in advance
> peleg
>

Bulk Upload : General operational error

Hi all. Have checked around and tried to fix this, but no joy as of yet.
I have got to the point where I have actually flipped most of the
values in the Bulk Upload component and even commented out all of the
data that is to be inserted (and even set SchemaGen to true).
This is actually the simplest of all the things I have worked on
recently and I think it has to do with the setup of the tables/relations
(done by someone else).
[I include the SQL generated for the related tables at the bottom of
this post]
I am inserting into table B, which has a column which stores the ID of
a key in table A (it is not a key in table B because it can apparently
be NULL sometimes).
Additionally I insert a new lookup record into table C and use the
inverse attribute to get the newly inserted ID and plug this into table
B.
It is ridiculously simple. Before I was getting an exception that it
couldn't drop some temporary table, but after I removed all the data
from the XML file, I guess Bulk Upload just runs through the rest of
the process, but determines there is no data and terminates - it seems
that at that termination point things go wrong.
The message I get is below (although I doubt it will be of much use):
<?xml version="1.0"?><Result
State="FAILED"><Error><HResult>0x8000FFFF</HResult><Description>General
operational
error</Description><Source><![CDATA[]]></Source><Type>FATAL</Type></Error></Result>
I finish this contract tomorrow, so any help before then would be much
appreciated. Whoever said leave the easiest to last!?
steven :: http://stevenR2.com
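For context, SQLXML Bulk Load (the "Bulk Upload component" above) is normally driven from script, roughly as in the sketch below; the connection string, file names, and property values here are assumptions, not taken from the post. The Transaction property may be worth a look, since Transaction = True stages the load through temporary files, which could relate to the "couldn't drop some temporary table" exception.
Set objBL = CreateObject("SQLXMLBulkLoad.SQLXMLBulkload.3.0")
objBL.ConnectionString = "Provider=SQLOLEDB;Data Source=(local);" & _
    "Initial Catalog=MyDb;Integrated Security=SSPI"
objBL.ErrorLogFile = "C:\bulkload_errors.xml"  ' failed records get logged here
objBL.SchemaGen = True              ' create missing tables from the mapping schema
objBL.Transaction = False           ' True routes the load through temp files
objBL.Execute "SchemeNominal.xsd", "SchemeNominal.xml"
Set objBL = Nothing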
======================================================================
======================================================================
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemeAssetValueBAMLDIFundRun_CashFlowType]') and
OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeAssetValueBAMLDIFundRun] DROP CONSTRAINT
FK_SchemeAssetValueBAMLDIFundRun_CashFlowType
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemeCashFlowNominal_CashFlowType]') and
OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeCashFlowNominal] DROP CONSTRAINT
FK_SchemeCashFlowNominal_CashFlowType
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemeFinalSummaryRun_CashFlowType]') and
OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeFinalSummaryRun] DROP CONSTRAINT
FK_SchemeFinalSummaryRun_CashFlowType
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_ClientSchemeLBPDetail_CashFlowType]') and
OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeLBPDetail] DROP CONSTRAINT
FK_ClientSchemeLBPDetail_CashFlowType
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemeLDIFundRevaluationBucketRun_CashFlowType]')
and OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeLDIFundRevaluationBucketRun] DROP CONSTRAINT
FK_SchemeLDIFundRevaluationBucketRun_CashFlowType
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_BAMLDIFund_CashFlowType]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeLDIFundValue] DROP CONSTRAINT
FK_BAMLDIFund_CashFlowType
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemePostRebalanceSummaryRun_CashFlowType]') and
OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemePostRebalanceSummaryRun] DROP CONSTRAINT
FK_SchemePostRebalanceSummaryRun_CashFlowType
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemePostRegearRun_CashFlowType]') and
OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemePostRegearRun] DROP CONSTRAINT
FK_SchemePostRegearRun_CashFlowType
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_ClientScheme_SchemeStatus]') and
OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[Scheme] DROP CONSTRAINT FK_ClientScheme_SchemeStatus
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_ClientScheme_SchemeType]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[Scheme] DROP CONSTRAINT FK_ClientScheme_SchemeType
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_Audit_SystemUser]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[Audit] DROP CONSTRAINT FK_Audit_SystemUser
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_ClientScheme_SystemUser]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[Scheme] DROP CONSTRAINT FK_ClientScheme_SystemUser
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemeRun_SystemUser]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeRun] DROP CONSTRAINT FK_SchemeRun_SystemUser
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SystemUserSchemeRun_SystemUser]') and
OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SystemUserSchemeRun] DROP CONSTRAINT
FK_SystemUserSchemeRun_SystemUser
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_YieldModel_SystemUser]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[YieldModel] DROP CONSTRAINT FK_YieldModel_SystemUser
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemeBucket_Scheme]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeBucket] DROP CONSTRAINT FK_SchemeBucket_Scheme
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemeCashflow_Scheme]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeCashflow] DROP CONSTRAINT
FK_SchemeCashflow_Scheme
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemeCashFlowNominal_Scheme]') and
OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeCashFlowNominal] DROP CONSTRAINT
FK_SchemeCashFlowNominal_Scheme
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_ClientSchemeLBP_ClientScheme]') and
OBJECTPROPERTY(id, N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeLBP] DROP CONSTRAINT
FK_ClientSchemeLBP_ClientScheme
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemeLDIFund_Scheme]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeLDIFund] DROP CONSTRAINT
FK_SchemeLDIFund_Scheme
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_SchemeNonLDIFund_Scheme]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[SchemeNonLDIFund] DROP CONSTRAINT
FK_SchemeNonLDIFund_Scheme
GO
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[FK_ThirdPartyAsset_Scheme]') and OBJECTPROPERTY(id,
N'IsForeignKey') = 1)
ALTER TABLE [dbo].[ThirdPartyAsset] DROP CONSTRAINT
FK_ThirdPartyAsset_Scheme
GO
/****** Object: Table [dbo].[SchemeCashFlowNominal] Script Date:
29/12/2005 13:30:23 ******/
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[SchemeCashFlowNominal]') and OBJECTPROPERTY(id,
N'IsUserTable') = 1)
drop table [dbo].[SchemeCashFlowNominal]
GO
/****** Object: Table [dbo].[Scheme] Script Date: 29/12/2005
13:30:23 ******/
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[Scheme]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[Scheme]
GO
/****** Object: Table [dbo].[CashFlowType] Script Date: 29/12/2005
13:30:23 ******/
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[CashFlowType]') and OBJECTPROPERTY(id,
N'IsUserTable') = 1)
drop table [dbo].[CashFlowType]
GO
/****** Object: Table [dbo].[SchemeStatus] Script Date: 29/12/2005
13:30:23 ******/
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[SchemeStatus]') and OBJECTPROPERTY(id,
N'IsUserTable') = 1)
drop table [dbo].[SchemeStatus]
GO
/****** Object: Table [dbo].[SchemeType] Script Date: 29/12/2005
13:30:23 ******/
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[SchemeType]') and OBJECTPROPERTY(id, N'IsUserTable')
= 1)
drop table [dbo].[SchemeType]
GO
/****** Object: Table [dbo].[SystemUser] Script Date: 29/12/2005
13:30:23 ******/
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[SystemUser]') and OBJECTPROPERTY(id, N'IsUserTable')
= 1)
drop table [dbo].[SystemUser]
GO
/****** Object: Table [dbo].[CashFlowType] Script Date: 29/12/2005
13:30:25 ******/
CREATE TABLE [dbo].[CashFlowType] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[LastUpdated] [datetime] NOT NULL ,
[RowTimestamp] [timestamp] NOT NULL
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[SchemeStatus] Script Date: 29/12/2005
13:30:25 ******/
CREATE TABLE [dbo].[SchemeStatus] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[LastUpdated] [datetime] NOT NULL ,
[MarkedForDeletion] [bit] NOT NULL ,
[RowTimestamp] [timestamp] NOT NULL
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[SchemeType] Script Date: 29/12/2005
13:30:25 ******/
CREATE TABLE [dbo].[SchemeType] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[Name] [nvarchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
[LastUpdated] [datetime] NOT NULL ,
[MarkedForDeletion] [bit] NOT NULL ,
[RowTimestamp] [timestamp] NOT NULL
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[SystemUser] Script Date: 29/12/2005
13:30:25 ******/
CREATE TABLE [dbo].[SystemUser] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[UserName] [nvarchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT
NULL ,
[Email] [nvarchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[MarkedForDeletion] [bit] NOT NULL ,
[RowTimestamp] [timestamp] NOT NULL
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[Scheme] Script Date: 29/12/2005
13:30:25 ******/
CREATE TABLE [dbo].[Scheme] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[ReferenceID] [nvarchar] (20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT
NULL ,
[SchemeName] [nvarchar] (100) COLLATE SQL_Latin1_General_CP1_CI_AS NOT
NULL ,
[InterLinkID] [int] NULL ,
[SchemeTypeID] [int] NOT NULL ,
[SchemeStatusID] [int] NOT NULL ,
[MinTransactionAmount] [numeric](18, 10) NULL ,
[SystemUserID] [int] NOT NULL ,
[LastUpdated] [datetime] NOT NULL ,
[MarkedForDeletion] [bit] NOT NULL ,
[RowTimestamp] [timestamp] NOT NULL
) ON [PRIMARY]
GO
/****** Object: Table [dbo].[SchemeCashFlowNominal] Script Date:
29/12/2005 13:30:25 ******/
CREATE TABLE [dbo].[SchemeCashFlowNominal] (
[ID] [int] IDENTITY (1, 1) NOT NULL ,
[CashFlowTypeID] [int] NOT NULL ,
[SchemeID] [int] NOT NULL ,
[NominalValue] [numeric](18, 10) NOT NULL ,
[RowTimestamp] [timestamp] NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[CashFlowType] WITH NOCHECK ADD
CONSTRAINT [PK_CashFlowType] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[SchemeStatus] WITH NOCHECK ADD
CONSTRAINT [PK_PensionSchemeStatus] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[SchemeType] WITH NOCHECK ADD
CONSTRAINT [PK_PensionSchemeType] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[SystemUser] WITH NOCHECK ADD
CONSTRAINT [PK_SystemUser] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Scheme] WITH NOCHECK ADD
CONSTRAINT [PK_ClientScheme] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[SchemeCashFlowNominal] WITH NOCHECK ADD
CONSTRAINT [PK_SchemeCashFlowNominal] PRIMARY KEY CLUSTERED
(
[ID]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[SchemeStatus] ADD
CONSTRAINT [DF_SchemeStatus_MarkedForDeletion] DEFAULT (0) FOR
[MarkedForDeletion]
GO
ALTER TABLE [dbo].[SchemeType] ADD
CONSTRAINT [DF_SchemeType_MarkedForDeletion] DEFAULT (0) FOR
[MarkedForDeletion]
GO
ALTER TABLE [dbo].[SystemUser] ADD
CONSTRAINT [DF_SystemUser_MarkedForDeletion] DEFAULT (0) FOR
[MarkedForDeletion]
GO
ALTER TABLE [dbo].[Scheme] ADD
CONSTRAINT [DF_Scheme_MarkedForDeletion] DEFAULT (0) FOR
[MarkedForDeletion]
GO
ALTER TABLE [dbo].[Scheme] ADD
CONSTRAINT [FK_ClientScheme_SchemeStatus] FOREIGN KEY
(
[SchemeStatusID]
) REFERENCES [dbo].[SchemeStatus] (
[ID]
) NOT FOR REPLICATION ,
CONSTRAINT [FK_ClientScheme_SchemeType] FOREIGN KEY
(
[SchemeTypeID]
) REFERENCES [dbo].[SchemeType] (
[ID]
) NOT FOR REPLICATION ,
CONSTRAINT [FK_ClientScheme_SystemUser] FOREIGN KEY
(
[SystemUserID]
) REFERENCES [dbo].[SystemUser] (
[ID]
) NOT FOR REPLICATION
GO
ALTER TABLE [dbo].[SchemeCashFlowNominal] ADD
CONSTRAINT [FK_SchemeCashFlowNominal_CashFlowType] FOREIGN KEY
(
[CashFlowTypeID]
) REFERENCES [dbo].[CashFlowType] (
[ID]
),
CONSTRAINT [FK_SchemeCashFlowNominal_Scheme] FOREIGN KEY
(
[SchemeID]
) REFERENCES [dbo].[Scheme] (
[ID]
)
GO
FWIW - here is the XML and schema. I have reduced this to just one
table and it still fails, so maybe I am missing something in the schema.
XML and schema below.
<?xml version="1.0" encoding="utf-8"?>
<SchemaList xmlns="http://tempuri.org/SchemeNominal.xsd">
<Scheme>
<InterLinkID>1</InterLinkID>
<SchemeCashFlowNominalList>
<Items>
<SchemeCashFlowNominal>
<!--
<SchemeID />
<NominalValue>5000</NominalValue>
<CashFlowType>
<Name>Fixed</Name>
<LastUpdated>2005-06-30</LastUpdated>
</CashFlowType>
-->
</SchemeCashFlowNominal>
</Items>
</SchemeCashFlowNominalList>
</Scheme>
</SchemaList>
<?xml version="1.0"?>
<!--
Look up ID in Schema table for the given InterLinkID. Insert this ID
into the SchemaID column of the
SchemeLDIFund table and get a new ID back. Insert this new ID as the
SchemaLDIFundID into the
SchemeLDIFundValue table.
-->
<xsd:schema
xmlns="http://tempuri.org/SchemeNominal.xsd"
targetNamespace="http://tempuri.org/SchemeNominal.xsd"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:sql="urn:schemas-microsoft-com:mapping-schema"
elementFormDefault="qualified">
<!--
<xsd:annotation>
<xsd:appinfo>
<sql:relationship name="NominalToCashFlow"
parent="SchemeCashFlowNominal" parent-key="CashFlowTypeID"
child="CashFlowType" child-key="ID" inverse="true" />
</xsd:appinfo>
</xsd:annotation>
-->
<xsd:element name="SchemaList" sql:is-constant="1">
<xsd:complexType>
<xsd:sequence minOccurs="0" maxOccurs="unbounded">
<xsd:element ref="Scheme" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="Scheme" sql:is-constant="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="InterLinkID" type="xsd:string" minOccurs="1"
/>
<xsd:element name="SchemeCashFlowNominalList"
sql:is-constant="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Items" minOccurs="0"
maxOccurs="unbounded" sql:is-constant="1">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="SchemeCashFlowNominal"
sql:relation="SchemeCachFlowNominal2">
<xsd:complexType>
<xsd:sequence>
<!--
<xsd:element name="SchemeID" type="xsd:string" minOccurs="0"
sql:field="SchemeID" />
<xsd:element name="NominalValue"
type="xsd:string" minOccurs="1" />-->
<!--
<xsd:element name="CashFlowType"
minOccurs="0" maxOccurs="1" sql:relationship="NominalToCashFlow">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Name"
type="xsd:string" minOccurs="1" />
<xsd:element name="LastUpdated"
type="xsd:string" minOccurs="1" sql:field="LastUpdated" />
</xsd:sequence>
</xsd:complexType>
</xsd:element>
-->
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>

Bulk Upload

I need to upload multiple reports produced on another server. Is there a
script or utility that will allow me to do this?
Thanks!
On Jul 3, 10:58 am, REI <R...@discussions.microsoft.com> wrote:
> I need to upload multiple reports produced on another server. Is there a
> script or utility that will allow me to do this?
> Thanks!
There is not a quick way to do it; however, this link might give you
some insight.
http://technet.microsoft.com/en-us/library/ms159720.aspx
Regards,
Enrique Martinez
Sr. Software Consultant
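The article that link points to covers the Reporting Services script host, rs.exe. Roughly speaking (the server URL and script name below are placeholders), you write a VB.NET .rss script that loops over the report files and publishes each one through the Report Server web service, then run it with something like:
rs -i PublishReports.rss -s http://yourserver/reportserver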

Bulk Updates taking a long time

We have a huge table with around 25 million records. We want to reset two int columns of all records to 0. Currently it's taking around 1.5 hours... What are the best practices we can follow to reduce the total update time?

Initially we used - Update <TableName> set <Column1>=0, <Column2>=0.
Later we modified the query to include a WHERE clause and did the update in batch mode such as

DECLARE @maxCount Int
DECLARE @iCount Int
SELECT @MaxCount = Max(ID) FROM organizationsource
SET @iCount = 0
WHILE (@iCount<@MaxCount)
BEGIN
UPDATE <tableName> set <Column1> = 0, <Column2>=0
WHERE ID between @iCount and @iCount+1000000
SET @iCount = @iCount + 1000000
END

Can you please suggest some tips to improve the update performance?
Can we do something at the SQL Server level / are there any settings at the database level for performing faster updates?

Thanks,
Loonysan

Hard to determine based on the information you've provided so far:

Is the ID field the primary index?|||

Thanks for your interest.

To answer your questions:

1) Yes - ID field is the primary key in my table.
2) This table will not be accessed by other applications during the update process.
3) Yeah - I have the Data and Log files in different drives. (Should I keep them on different disks for better performance?)
4) We are using SQL Server 2005 :)
5) The code resides in a Stored Procedure

Thanks,
Loonysan

|||

Ok.

Are the columns you are updating indexed also?

This can slow down updates. If so, drop the index and recreate after the update has happened.

Also - make sure the following is turned off to improve performance:

auto create statistics
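Put together, that advice might look something like the sketch below; the database, table, and index names are placeholders, with the batched UPDATE loop from the question going in the middle.
ALTER DATABASE MyDb SET AUTO_CREATE_STATISTICS OFF
GO
DROP INDEX IX_MyTable_Column1 ON MyTable   -- only if the updated columns are indexed
GO
-- ... run the batched UPDATE loop from the question here ...
CREATE INDEX IX_MyTable_Column1 ON MyTable (Column1)
GO
ALTER DATABASE MyDb SET AUTO_CREATE_STATISTICS ON
GO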

Bulk UPDATE on SQL Server 2000

I need to do a bulk update of 1 field in a data-table (removing
zero-padding). The update is expected to update 52.5 million records.
What techniques could I use so I can ensure
1) Consistency in the transactional log backups (every 10 mins)
2) Consistency in the full DB backup every day
3) Disk drive of transaction logs does not get filled up completely (I suspect
even if I set rowcount to 10,000 and loop through, disk space would still
be used)
Target environment
SQL Server 2000 cluster on 2 nodes running Windows 2003 Server
(Different disk drives for data and log files)
A normal set based operation will do fine: update mytable set myfield = 'mynewvalue'
However, this will create significant locking so you're only going to be
this aggressive on a database that's not in use (Overnight, for most) or
write a cursor that includes some logic in it but that's going to be slow.
--
DatabaseAdmins.com
Databases Reliable. Available.
"Patrick" wrote:
> I need to do a bulk update of 1 field in a data-table (removing
> zero-padding). The update is expected to update 52.5 million records.
> What techniques could I use so I can ensure
> 1) Consistency in the transactional log backups (every 10 mins)
> 2) Consistency in the full DB backup every day
> 3) Disk drive of transaction logs does not get filled up completely (I suspect
> even if I set rowcount to 10,000 and loop through, disk space would still
> be used)
> Target environment
> SQL Server 2000 cluster on 2 nodes running Windows 2003 Server
> (Different disk drives for data and log files)|||SQL Server takes care of consistency. To make sure you don't blow up the
transaction log, dividing the update into smaller chunks of updates (e.g.
10,000 rows at a time as you mentioned) is the right approach in general.
I'd like to add that you may have to pace your updates (e.g. sleep for some
time after each iteration) so that your transaction log backups at the
10-minute interval have a chance coming in to clear out the inactive portion
of the log for reuse. Alternatively, you can back up the transaction log after
each iteration (or every so many iterations) in the loop. You need to
experiment a bit to find out the best chunk size and whether the loop without
any wait time would be too tight in your particular configuration.
Also, it's certainly possible you can choose to update all the rows in a
single shot, depending on the size of your transaction log.
Linchi
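A sketch of the pacing Linchi describes, using the SET ROWCOUNT idiom the thread assumes (table, column, database, and backup path are placeholders):
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    UPDATE mytable SET myfield = 'mynewvalue'
    WHERE myfield <> 'mynewvalue'    -- touch only rows still needing the change
    IF @@ROWCOUNT = 0 BREAK          -- read @@ROWCOUNT immediately after the UPDATE
    BACKUP LOG MyDb TO DISK = 'E:\LogBackups\MyDb_log.bak'
    WAITFOR DELAY '00:00:05'         -- give the log (and other users) a breather
END
SET ROWCOUNT 0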
"burt_king" wrote:
> A normal set based operation will do fine: update mytable set myfield = 'mynewvalue'
> However, this will create significant locking so you're only going to be
> this aggressive on a database that's not in use (Overnight, for most) or
> write a cursor that includes some logic in it but that's going to be slow.
> --
> DatabaseAdmins.com
> Databases Reliable. Available.
>
> "Patrick" wrote:
> > I need to do a bulk update of 1 field in a data-table (removing
> > zero-padding). The update is expected to update 52.5 million records.
> >
> > What techniques could I use so I can ensure
> > 1) Consistency in the transactional log backups (every 10 mins)
> > 2) Consistency in the full DB backup every day
> > 3) Disk drive of transaction logs does not get filled up completely (I suspect
> > even if I set rowcount to 10,000 and loop through, disk space would still
> > be used)
> >
> > Target environment
> > SQL Server 2000 cluster on 2 nodes running Windows 2003 Server
> > (Different disk drives for data and log files)|||I have composed the following SQL; could you tell me what is wrong? The
Select getDate() only printed <10 dates, when I am expecting millions of
records to be updated!
DECLARE @recCount int
SELECT @recCount=50001
SET rowcount 50000
WHILE @recCount>0
BEGIN
select getdate()
waitfor delay '000:00:5'
BEGIN TRAN
UPDATE
cor_transactions
SET
id_Entity = SUBSTRING(id_Entity,2,3)
WHERE
len(id_Entity)>3
and
id_Entity like '0%'
IF @@ERROR= 0
COMMIT TRAN
ELSE
BEGIN
ROLLBACK TRAN
print 'ERROR' + Cast(@@ERROR as varchar(50))
BREAK
END
SELECT @recCount=@@ROWCOUNT
END
SELECT @recCount=50001
SET rowcount 50000
WHILE @recCount>0
BEGIN
select getdate()
waitfor delay '000:00:5'
BEGIN TRAN
UPDATE
cor_stocks
SET
id_Entity = SUBSTRING(id_Entity,2,3)
WHERE
len(id_Entity)>3
and
id_Entity like '0%'
IF @@ERROR= 0
COMMIT TRAN
ELSE
BEGIN
ROLLBACK TRAN
print 'ERROR' + Cast(@@ERROR as varchar(50))
BREAK
END
SELECT @recCount=@@ROWCOUNT
END
SELECT @recCount=50001
SET rowcount 50000
WHILE @recCount>0
BEGIN
select getdate()
waitfor delay '000:00:5'
BEGIN TRAN
UPDATE
cor_inventory
SET
id_Entity = SUBSTRING(id_Entity,2,3)
WHERE
len(id_Entity)>3
and
id_Entity like '0%'
IF @@ERROR= 0
COMMIT TRAN
ELSE
BEGIN
ROLLBACK TRAN
print 'ERROR' + Cast(@@ERROR as varchar(50))
BREAK
END
SELECT @recCount=@@ROWCOUNT
END
"Linchi Shea" wrote:
> SQL Server takes care of consistency. To make sure you don't blow up the
> transaction log, dividing the update into smaller chunks of updates (e.g.
> 10,000 rows at a time as you mentioned) is the right approach in general.
> I'd like to add that you may have to pace your updates (e.g. sleep for some
> time after each iteration) so that your transaction log backups at the
> 10-minute interval have a chance coming in to clear out the inactive portion
> of the log for reuse. Alternatively, you can back up the transaction log after
> each iteration (or every so many iterations) in the loop. You need to
> experiment a bit to find out the best chunk size and whether the loop without
> any wait time would be too tight in your particular configuration.
> Also, it's certainly possible you can choose to update all the rows in a
> single shot, depending on the size of your transaction log.
> Linchi
> "burt_king" wrote:
> > A normal set based operation will do fine: update mytable set myfield = 'mynewvalue'
> >
> > However, this will create significant locking so you're only going to be
> > this aggressive on a database that's not in use (Overnight, for most) or
> > write a cursor that includes some logic in it but that's going to be slow.
> >
> > --
> > DatabaseAdmins.com
> > Databases Reliable. Available.
> >
> >
> > "Patrick" wrote:
> >
> > > I need to do a bulk update of 1 field in a data-table (removing
> > > zero-padding). The update is expected to update 52.5 million records.
> > >
> > > What techniques could I use so I can ensure
> > > 1) Consistency in the transactional log backups (every 10 mins)
> > > 2) Consistency in the full DB backup every day
> > > 3) Disk drive of transaction logs does not get filled up completely (I suspect
> > > even if I set rowcount to 10,000 and loop through, disk space would still
> > > be used)
> > >
> > > Target environment
> > > SQL Server 2000 cluster on 2 nodes running Windows 2003 Server
> > > (Different disk drives for data and log files)
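For what it's worth, one likely culprit in the loops above: COMMIT TRANSACTION resets @@ROWCOUNT to 0, so the SELECT @recCount=@@ROWCOUNT placed after the IF/COMMIT always reads 0 and each loop ends on its first pass, which would explain the handful of getdate() rows. Capturing the values immediately after the UPDATE, before anything else can reset them, would look like this (same table and WHERE clause as above, plus an assumed @err variable):
DECLARE @err int
-- inside the loop, after BEGIN TRAN:
UPDATE cor_transactions
SET id_Entity = SUBSTRING(id_Entity,2,3)
WHERE len(id_Entity)>3 and id_Entity like '0%'
SELECT @recCount = @@ROWCOUNT, @err = @@ERROR  -- capture both right away
IF @err = 0
COMMIT TRAN
ELSE
BEGIN
ROLLBACK TRAN
print 'ERROR' + Cast(@err as varchar(50))
BREAK
END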

Bulk SQL Server 2000 Registration

Hi All,
I have about 60 or so SQL Servers to register. I don't have the option of
importing from another machine.
I think I may be able to do this via SQL-DMO but am not sure how to get started.
Does anyone have any ideas about doing mass SQL Server registrations?
Thanks.
Check out:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sqldmo/dmoref_ob_r_62pg.asp
Haven't tried it but you might also be able to create a query and loop for
sp_addserver via QA or OSQL.
HTH
Jerry
"brawtaman" <brawtaman@.discussions.microsoft.com> wrote in message
news:8A49CC26-4F93-49C7-9407-8451F2C38648@.microsoft.com...
> Hi All,
> I have about 60 or so SQL Servers to register. I don't have the option of
> importing from another machine.
> I think I may be able to do this via SQL-DMO but not sure how to get
> started.
> Does anyone have any ideas about doing mass SQL Server registrations?
> Thanks.|||Thanks. I will try this.
Will keep the community posted ( pun intended)!
"Jerry Spivey" wrote:
> Check out:
> http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sqldmo/dmoref_ob_r_62pg.asp
> Haven't tried it but you might also be able to create a query and loop for
> sp_addserver via QA or OSQL.
> HTH
> Jerry
> "brawtaman" <brawtaman@.discussions.microsoft.com> wrote in message
> news:8A49CC26-4F93-49C7-9407-8451F2C38648@.microsoft.com...
> > Hi All,
> >
> > I have about 60 or so SQL Servers to register. I don't have the option of
> > importing from another machine.
> >
> > I think I maybe able to do this via SQL-DMO but not sure how to get
> > started.
> >
> > Does anyone have any ideas about doing mass SQl Server registrations?
> >
> > Thanks.
>
>|||OK. I have made a little progress.
I got the script to create the Server Groups in EM but it keeps crashing
with a memory error when I try to add the RegisteredServer object to the
RegisteredServers collection. In other words, I am unable to start
registering servers.
I have included the vbscript below:
' Declare variables
Const ForReading = 1, ForWriting = 2
Dim dmoApp, dmoServerGroup, dmoRegServer
Dim fso, MyFile, strServerName
on error resume Next
' Create a ref to the SQL Server Object
Set dmoApp = Wscript.CreateObject("SQLDMO.Application")
' Create a ServerGroup Object
' Set dmoServerGroup = CreateObject("SQLDMO.ServerGroup")
Set dmoServerGroup = Wscript.CreateObject("SQLDMO.ServerGroup")
' Add the CRAWLEY Server Group name
dmoServerGroup.Name = "CRAWLEY"
dmoApp.ServerGroups.Add(dmoServerGroup)
' Add the TAMPA Server Group name
dmoServerGroup.Name = "TAMPA"
dmoApp.ServerGroups.Add(dmoServerGroup)
' Register the TAMPA Servers
Set dmoRegServer = Wscript.CreateObject("SQLDMO.RegisteredServer")
dmoRegServer.UseTrustedConnection = 1
Set fso = Wscript.CreateObject("Scripting.FileSystemObject")
Set MyFile = fso.OpenTextFile("C:\RWray\VBScripts\TampaServers.txt",
ForReading, True)
Do Until MyFile.AtEndOfStream
strServerName = MyFile.Readline
dmoRegServer.Name = strServerName
Wscript.Echo strServerName
dmoServerGroup.RegisteredServers.Add(dmoRegServer)
WScript.Echo Err.Description
Loop
MyFile.Close
Set dmoApp = Nothing
Set dmoServerGroup = Nothing
set dmoRegServer = Nothing
It crashes on the line
dmoServerGroup.RegisteredServers.Add(dmoRegServer).
Thanks.
"brawtaman" wrote:
> Thanks. I will try this.
> Will keep the community posted ( pun intended)!
> "Jerry Spivey" wrote:
> > Check out:
> >
> > http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sqldmo/dmoref_ob_r_62pg.asp
> >
> > Haven't tried it but you might also be able to create a query and loop for
> > sp_addserver via QA or OSQL.
> >
> > HTH
> >
> > Jerry
> > "brawtaman" <brawtaman@.discussions.microsoft.com> wrote in message
> > news:8A49CC26-4F93-49C7-9407-8451F2C38648@.microsoft.com...
> > > Hi All,
> > >
> > > I have about 60 or so SQL Servers to register. I don't have the option of
> > > importing from another machine.
> > >
> > > I think I may be able to do this via SQL-DMO but not sure how to get
> > > started.
> > >
> > > Does anyone have any ideas about doing mass SQL Server registrations?
> > >
> > > Thanks.
> >
> >
> >
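For what it's worth, one possible cause of the crash: the script above reuses a single SQLDMO.RegisteredServer object (and a single ServerGroup object) for every Add call, and SQL-DMO collections generally expect a fresh object per item. An untested rewrite of the registration loop along those lines, keeping the original names (the group lookup by name is an assumption):
' Fetch the target group from the application, then create a new
' RegisteredServer object on every iteration
Set dmoServerGroup = dmoApp.ServerGroups("TAMPA")
Do Until MyFile.AtEndOfStream
    strServerName = MyFile.Readline
    Set dmoRegServer = Wscript.CreateObject("SQLDMO.RegisteredServer")
    dmoRegServer.UseTrustedConnection = 1
    dmoRegServer.Name = strServerName
    dmoServerGroup.RegisteredServers.Add dmoRegServer
Loop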

bulk sql insert task can do tables with identity?

I have some Bulk Insert Tasks in SSIS inserting into tables that have IDENTITY set ON
in one column. Can the Bulk Insert Task handle this smoothly?

PS: I cannot find any option in the Bulk Insert Task to ignore identity columns...

Now my steps are: prepare database -> create database -> bulk insert into tables.

I'm still working on my previous problem, Jamie.

Open the Bulk Insert Task, select the Options page, select Options, open the drop down. Check "Enable Identity Insert".
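For what it's worth, the plain T-SQL BULK INSERT statement exposes the same behaviour through its KEEPIDENTITY option; a minimal sketch, where the table name, file path and terminators are assumptions:

-- KEEPIDENTITY loads the identity values from the data file
-- instead of letting SQL Server generate new ones
BULK INSERT dbo.TargetTable              -- hypothetical table
FROM 'C:\data\TargetTable.txt'           -- hypothetical data file
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    KEEPIDENTITY
);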

Bulk repopulation of a table

Hi, I have a table T with 3 int columns. I need to update it every X
hours with a query that returns the same 3 int columns. Currently I'm
doing it using a cursor iterating over each row and checking whether I
need to DELETE/UPDATE/INSERT it; this, however, isn't very efficient.
So I tested 2 additional methods, one involving deleting all the rows
from the target table and then inserting everything, and the other
dropping the table, recreating and inserting.
The results are not very surprising:
PROC,CPU,READS,WRITES,DURATION
DROP,2720,177757,218,4000
DELE,3150,183060,230,5300
CURS,8200,504818,247,8240
Which indicates that dropping and recreating is the way to go; any
suggestions?
I don't like the DROP/CREATE scheme since it might involve the table's
downtime (even when I CREATE a T1, then DROP T and sp_rename T1->T). If
there is no better way, will there be problems due to sp_rename
changing the name but not the ID of an object?
P.S. it's SQL 2005, and the query itself executes in (2300,11596,0,2730)
[same column names as above].
Thanks.|||Can you give some more information so we can provide a more realistic
answer? The DDL for the existing table would help along with how or where
you are getting the data to update the table with. If the new data is a
complete, up-to-date set then you have several options. I am assuming it is a
flat file, but since you didn't provide that information it is only a guess.
I would look at using BULK INSERT to load the new data into a table that you
create. Let's call it X. Then you can prep this table to get it exactly like
you want it to be, drop the original table and rename X to what the old one
was. You may find that renaming the old one first, renaming X and then
dropping might yield the least time that the original table is off-line to
the users. But either way it should be a matter of a second or less.
Andrew J. Kelly SQL MVP
<johnsolver@.gmail.com> wrote in message
news:1137942204.960433.107600@.g49g2000cwa.googlegroups.com...
> Hi, I have a table T with 3 int columns, I need to update it every X
> hours with a query that returns the same 3 int columns. Currently I'm
> doing it using a cursor iterating over each row and checking whether I
> need to DELETE/UPDATE/INSERT it, this however isn't very efficient.
> So I tested 2 additional methods, one involving deleting all the rows
> from the target table and then inserting everything and the other by
> dropping the table, recreating and inseting.
> The results are not very surprising:
> PROC,CPU,READS,WRITES,DURATION
> DROP,2720,177757,218,4000
> DELE,3150,183060,230,5300
> CURS,8200,504818,247,8240
> Which indicates that DROPing and reCREATing is the way to go, any
> suggestions?
> I don't like the DROP/CREATE scheme since it might involve the TABLEs
> downtime (even when I CREATE a T1 then DROP T and sp_rename T1->T). If
> there is no better way, will there be problems due to sp_rename
> changing the name but not the ID of an object?
> P.S. it's SQL2005, and the query itself executes in (2300,11596,0,2730)
> [same column names as above].
> Thanks.
>|||Instead of deleting all rows from the table you can truncate it. It's not
a good idea to drop and recreate the table.
Or
you can first store the data in a temporary table and then update the related
data in the permanent table using T-SQL.
Please post the data and how you want to update it.
Regards
Amish Shah
*** Sent via Developersdex http://www.examnotes.net ***|||Hi, sorry for not providing enough info:
the table:
CREATE TABLE [dbo].[T](
[tid] [int] NOT NULL,
[pid] [int] NOT NULL DEFAULT (0),
[r] [int] NOT NULL,
CONSTRAINT [PK_T] PRIMARY KEY CLUSTERED
(
[tid] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY];
The query returns exactly those 3 columns, simplistically: SELECT tid,pid,r
FROM A (it returns ~90k rows). I don't think that the exact query
matters... I'll be happy to provide additional info if needed.
Thanks.|||As to updating data, assuming T contains the following:
tid,pid,r
1,3,6
2,3,40
5,4,60
and the query (Q) returns
tid,pid,r
1,3,50
8,1,1000
Then I would like T to contain exactly what Q returned, namely:
DELETE FROM T WHERE tid=2
DELETE FROM T WHERE tid=5
UPDATE T SET r=50 WHERE tid=1
INSERT INTO T (tid,pid,r) VALUES(8,1,1000)
etc.|||If you are updating the value for r in table T, based on a match of both tid
and pid from the source table, then
--create a temporary table that has the data to be imported.
SELECT tid, pid, r INTO #importtable FROM (...the rest of your import query
...)
--Delete records from T that aren't in the new import batch
DELETE FROM T WHERE NOT EXISTS
(SELECT i.tid FROM #importtable i WHERE i.tid = T.tid)
--update the records that have matching tid and pid values
UPDATE T SET r = i.r FROM #importtable i
INNER JOIN T ON T.tid=i.tid AND T.pid = i.pid
--INSERT new records into T that didn't exist before
INSERT INTO T
SELECT tid,pid,r FROM #importtable i
WHERE NOT EXISTS
(SELECT tid FROM T WHERE T.tid = i.tid)
This gets more complicated if the PK on T is composite; I couldn't really
tell from the DDL you posted earlier.
Of course, it may be faster to just truncate the table and import in the new
data, as suggested earlier by Amish.
Another suggestion is to encapsulate the above into an INSTEAD OF trigger,
and then you could simply run
INSERT INTO T SELECT ...rest of your import query here ...
The INSTEAD OF trigger would fire before any primary key constraints would
be checked, so it could achieve what you want.
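For illustration, a hedged sketch of that INSTEAD OF idea (the trigger name is an assumption; note that an INSTEAD OF trigger is not fired recursively by its own INSERT, so the final INSERT inside it performs the real write):

CREATE TRIGGER trg_T_Repopulate ON dbo.T   -- hypothetical trigger name
INSTEAD OF INSERT
AS
BEGIN
    -- remove rows not present in the incoming batch
    DELETE FROM T WHERE NOT EXISTS
        (SELECT 1 FROM inserted i WHERE i.tid = T.tid);
    -- refresh rows that match on the key
    UPDATE T SET r = i.r
    FROM inserted i INNER JOIN T ON T.tid = i.tid AND T.pid = i.pid;
    -- add rows that are new
    INSERT INTO T (tid, pid, r)
    SELECT i.tid, i.pid, i.r FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM T WHERE T.tid = i.tid);
END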
"johnsolver@.gmail.com" wrote:

> As to updating data, assuming T contains the following:
> tid,pid,r
> 1,3,6
> 2,3,40
> 5,4,60
> and the query (Q) returns
> tid,pid,r
> 1,3,50
> 8,1,1000
> Then I would like T to contain exactly what Q returned, namely:
> DELETE FROM T WHERE tid=2
> DELETE FROM T WHERE tid=5
> UPDATE T SET r=50 WHERE tid=1
> INSERT INTO T (tid,pid,r) VALUES(8,1,1000)
> etc.
>|||Thanks for the reply Mark,
I've tried your suggestion, performance-wise it's better than the
CURSOR solution (not surprising) but it's not as good as DROP or
TRUNCATE approaches. The key isn't composite; it's only one column, as
you guessed: tid.
Amish: Why isn't it a good idea to DROP/CREATE the table? (except for
the minimal downtime due to sp_rename)?
Thanks.|||Johnsol,
I also suggested that
you can first store the data in a temporary table and then update the related
data in the permanent table using T-SQL.
But I was unable to give a solution until you posted the DDL.
Second,
changing the table name invalidates the stored procedures, views and
other references that use this table, until you recreate a new table with the
old name.
If you drop and recreate the table you have to recreate all relations and
indexes again. You cannot guarantee the referential integrity of the data
in other tables if this table is part of any foreign key relationship
with them, since this table is dropped many times.
In some cases we have seen that updating data in the table was not
easily possible using T-SQL, and you had to create a large number of temp
tables, check a number of conditions and check data from a number of
tables.
So I have given you all the options, but until I got the data I was not
able to give you a specific solution.
Regards
Amish Shah|||Thanks for the reply Amish,
in the end I'll probably go with CREATE and DROP, simply because its
performance is about 1.5 times faster than the temp table solution, and I
don't have any foreign keys/views built on the table.
So I've opted for the following (DDL follows). BTW, should I wrap the
whole sequence in BEGIN TRAN ... COMMIT TRAN?
CREATE TABLE [dbo].[T_1](
[tid] [int] NOT NULL,
[pid] [int] NOT NULL DEFAULT ((0)),
[r] [int] NOT NULL,
CONSTRAINT [PK_T_1] PRIMARY KEY CLUSTERED
(
[tid] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
INSERT INTO T_1 SELECT tid,pid,r FROM A;
DROP TABLE T;
EXEC sp_rename 'T_1','T';
EXEC sp_rename 'T.PK_T_1','PK_T','INDEX';|||Yes, at the very least you should do it all in a transaction.
Regards
Amish Shah
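A minimal sketch of the swap wrapped in a transaction, using the object names above (SET XACT_ABORT ON is an extra assumption so that any runtime failure rolls the whole swap back):

SET XACT_ABORT ON;   -- abort and roll back on any runtime error
BEGIN TRAN;
    INSERT INTO T_1 SELECT tid, pid, r FROM A;
    DROP TABLE T;
    EXEC sp_rename 'T_1', 'T';
    EXEC sp_rename 'T.PK_T_1', 'PK_T', 'INDEX';
COMMIT TRAN;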

Bulk Move

hi,
I am using Microsoft SQL Server 2005. I want to move a large set of
data with a specific condition to another database on the same machine.
For example, after a period of time I want to archive data by a
time condition (a specific year) to another database. That
archive database is going to store the previous data which are
archived.
Am I going to use queries to move the data for each table? Or is there
a better way?
What is the best approach to do so?
Thanks in advance,
Ali|||Ali,
What does "a large set of data" mean?
You can have a job which does an INSERT statement every day (that's
probably not too large a set of data), e.g.
INSERT INTO ... dbname.dbo.table ... SELECT ..., and having properly created indexes on the
source tables.
"Ali" <nikzad.a@.gmail.com> wrote in message
news:1191158805.093132.214280@.22g2000hsm.googlegroups.com...
> hi,
> I am using Microsoft SQL server 2005. I want to move a large set of
> data with a specific condition to other database in the same machine.
> for example after a period of time I want to archive data by the
> condtion of time ( in a specific year ) to other database. that
> archive database is going to store the previous data which are
> archived.
> Am I going to use queries to move the data for each table? or there is
> a better way?
> what is the best approach to do so?
> Thanks in advance,
> Ali
>|||Why are you archiving data? If the purpose is manageability, you might
consider partitioning instead of moving data to another database (assuming
SQL 2005 Enterprise Edition). You can also use a partition switch to
efficiently delete or insert mass amounts of data by partition as part of
your move process.
You'll need to use queries to extract and delete data if partitioning is not
an option. Bulk insert (especially minimally logged) is faster than INSERT
when large amounts of data are involved. A common technique to load large
amounts of data is to remove indexes on the destination table, perform a
minimally logged bulk insert (perhaps in a SSIS package) and then recreate
indexes and constraints.
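To illustrate the query-based extract-and-delete route, a hedged sketch where the database, table and column names are all assumptions:

-- copy one year of data to the archive database, then remove it from the source
INSERT INTO ArchiveDb.dbo.Orders (OrderID, OrderDate, Amount)
SELECT OrderID, OrderDate, Amount
FROM SourceDb.dbo.Orders
WHERE OrderDate >= '20060101' AND OrderDate < '20070101';

DELETE FROM SourceDb.dbo.Orders
WHERE OrderDate >= '20060101' AND OrderDate < '20070101';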
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Ali" <nikzad.a@.gmail.com> wrote in message
news:1191158805.093132.214280@.22g2000hsm.googlegroups.com...
> hi,
> I am using Microsoft SQL server 2005. I want to move a large set of
> data with a specific condition to other database in the same machine.
> for example after a period of time I want to archive data by the
> condtion of time ( in a specific year ) to other database. that
> archive database is going to store the previous data which are
> archived.
> Am I going to use queries to move the data for each table? or there is
> a better way?
> what is the best approach to do so?
> Thanks in advance,
> Ali
>|||Do you want to COPY or MOVE the data?
If it's a copy, you can use the synchronization feature of SQL Server. This
process takes a copy of the source database on a regular basis (up to
realtime synchronization).
If you want to control everything (like which rows you want to copy, by
applying filters) and/or want to delete the source data (so moving the
data),
you can do it by running SQL scripts or using SSIS:
you have to write the SQL statements required to copy the expected rows
and delete them in the source if you want to remove them.
"Ali" <nikzad.a@.gmail.com> wrote in message
news:1191158805.093132.214280@.22g2000hsm.googlegroups.com...
> hi,
> I am using Microsoft SQL server 2005. I want to move a large set of
> data with a specific condition to other database in the same machine.
> for example after a period of time I want to archive data by the
> condtion of time ( in a specific year ) to other database. that
> archive database is going to store the previous data which are
> archived.
> Am I going to use queries to move the data for each table? or there is
> a better way?
> what is the best approach to do so?
> Thanks in advance,
> Ali
>|||On Sep 30, 5:02 pm, "Dan Guzman" <guzma...@.nospam-
online.sbcglobal.net> wrote:
> Why are you archiving data? If the purpose is manageability, you might
> consider partitioning instead of moving data to another database (assuming
> SQL 2005 Enterprise Edition). You can also use a partition switch to
> efficiently delete or insert mass amounts of data by partition as part of
> your move process.
> You'll need to use queries to extract and delete data if partitioning is not
> an option. Bulk insert (especially minimally logged) is faster than INSERT
> when large amounts of data are involved. A common technique to load large
> amounts of data is to remove indexes on the destination table, perform a
> minimally logged bulk insert (perhaps in a SSIS package) and then recreate
> indexes and constraints.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Ali" <nikza...@.gmail.com> wrote in message
> news:1191158805.093132.214280@.22g2000hsm.googlegroups.com...
> > hi,
> > I am using Microsoft SQL server 2005. I want to move a large set of
> > data with a specific condition to other database in the same machine.
> > for example after a period of time I want to archive data by the
> > condtion of time ( in a specific year ) to other database. that
> > archive database is going to store the previous data which are
> > archived.
> > Am I going to use queries to move the data for each table? or there is
> > a better way?
> > what is the best approach to do so?
> > Thanks in advance,
> > Ali
Thanks for the answers. So I think I'd better use partitioning rather than
copying or moving the data to another database.
My reason was the speed of retrieving data. I think partitioning can
do the job.
Thanks,
Ali|||> My reason was the speed of retrieving data. I think partitioning can
> do the Job.
Partitioning will help manageability but not necessarily performance. The
key to data retrieval performance is indexing and query tuning. Appropriate
indexes reduce the amount of data that needs to be touched without
physically moving data via archiving or partitioning.
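For example, a hedged sketch of a covering index for a hypothetical date-filtered query (all names are assumptions):

-- lets the optimizer seek on the date range and answer the query
-- from the index alone, instead of scanning the whole table
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID, Amount);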
--
Hope this helps.
Dan Guzman
SQL Server MVP
"Ali" <nikzad.a@.gmail.com> wrote in message
news:1191166646.655159.267330@.g4g2000hsf.googlegroups.com...
> On Sep 30, 5:02 pm, "Dan Guzman" <guzma...@.nospam-
> online.sbcglobal.net> wrote:
>> Why are you archiving data? If the purpose is manageability, you might
>> consider partitioning instead of moving data to another database (assuming
>> SQL 2005 Enterprise Edition). You can also use a partition switch to
>> efficiently delete or insert mass amounts of data by partition as part of
>> your move process.
>> You'll need to use queries to extract and delete data if partitioning is not
>> an option. Bulk insert (especially minimally logged) is faster than INSERT
>> when large amounts of data are involved. A common technique to load large
>> amounts of data is to remove indexes on the destination table, perform a
>> minimally logged bulk insert (perhaps in a SSIS package) and then recreate
>> indexes and constraints.
>> --
>> Hope this helps.
>> Dan Guzman
>> SQL Server MVP
>> "Ali" <nikza...@.gmail.com> wrote in message
>> news:1191158805.093132.214280@.22g2000hsm.googlegroups.com...
>> > hi,
>> > I am using Microsoft SQL server 2005. I want to move a large set of
>> > data with a specific condition to other database in the same machine.
>> > for example after a period of time I want to archive data by the
>> > condtion of time ( in a specific year ) to other database. that
>> > archive database is going to store the previous data which are
>> > archived.
>> > Am I going to use queries to move the data for each table? or there is
>> > a better way?
>> > what is the best approach to do so?
>> > Thanks in advance,
>> > Ali
> Thanks for the answers. so I think I'd better use partitioning than
> copying or moving them to other database.
> My reason was the speed of retrieving data. I think partitioning can
> do the Job.
> Thanks,
> Ali
>

Bulk Logging - Took longer.

Hello,
We have a job that runs for a duration - n minutes.
There are quite a few stored procedures in it that use the 'select into'
statement.
Tried switching over to 'Bulk logging' to reduce the duration of the run.
Observed that the run actually took n+30 minutes.
Our expectation was that it would be either n or n minus something.
Did not expect the duration to increase.
What could the reason be behind this?
Cheers
SQLCatZ|||SQLCatz,
Have you looked at an execution plan of the SELECT * INTO .. statements?
Do you have any WHERE conditions on the source table? How big are the
tables? Have you checked whether the optimizer was able to use an index for the
query?
"SQLCatz" <SQLCatz@.discussions.microsoft.com> wrote in message
news:092FC6D0-FC93-4143-9886-4D1C0D61CED7@.microsoft.com...
> Hello,
> We have a job that runs for a duration - n minutes.
> There are quite a few stored procedures in it that use the 'select into'
> statement.
> Tried switching over to 'Bulk logging' to reduce the duration of the run.
> Observed that the run actually took n+30 minutes.
> Our expectation was that it would be either n or n minus something.
> Did not expect the duration to increase.
> What could the reason be behind this?
> Cheers
> SQLCatZ
>
|||Uri,
I haven't looked at the execution plans as yet.
But I can confirm that the query has a WHERE clause on the source table and the columns in
it are part of existing indexes.
Cheers!
SQLCatZ
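One sanity check worth running before comparing timings is to confirm which recovery model was actually in effect during the run; a minimal sketch, assuming a hypothetical database name:

-- returns FULL, BULK_LOGGED or SIMPLE
SELECT DATABASEPROPERTYEX('MyDb', 'Recovery') AS RecoveryModel;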

Bulk Logged Recovery Model

Hello all,
If a database is set to the bulk-logged recovery model, does the transaction
log truncate itself (as it does in the simple model), or do you have to
set up a backup tlog job like you do in the full recovery model for it
to truncate itself? (SQL 2000)
Thanks,
Raziq.
*** Sent via Developersdex http://www.developersdex.com ***
Don't just participate in USENET...get rewarded for it!|||No, it does not truncate as it would in simple.
Also, if you are in bulk-logged and do a tran log backup then the data files must be accessible. Thus in bulk-logged mode you cannot rescue the tail of the log (using BACKUP LOG WITH NO_TRUNCATE) unless the db files are intact. Full mode will let you do this. Thus there is a difference in disaster recovery planning.
Mike John
"Raziq Shekha" <raziq_shekha@.anadarko.com> wrote in message news:OD5tN9duDHA.536@.tk2msftngp13.phx.gbl...
> Hello all,
> If a database is set to bulk-logged recovery model, does the transaction
> log truncate itself (as it does in the simple model) or do you have to
> set up a backup tlog job like you do in the full recovery model for it
> to truncate itself? (SQL 2000)
> Thanks,
> Raziq.
> *** Sent via Developersdex http://www.developersdex.com ***
> Don't just participate in USENET...get rewarded for it!|||"Raziq Shekha" <raziq_shekha@.anadarko.com> wrote in message
news:OD5tN9duDHA.536@.tk2msftngp13.phx.gbl...
> If a database is set to bulk-logged recovery model, does the transaction
> log truncates itself (as it does in the simple model) or do you have to
> set up a backup tlog job like you do in the full recovery model for it
> to truncate itself? (SQL 2000)
While a database is set to bulk-logged, the transaction log won't
truncate itself; the model still allows transaction log recoverability while
minimally logging bulk import activity. You're correct that you'd need to
set up a backup transaction log task. For more information see BOL 'Using
Recovery Models' and 'Bulk-Logged Recovery'.
Steve
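A minimal sketch of the two pieces discussed above, assuming a hypothetical database name and backup path:

-- switch the database to the bulk-logged model
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

-- the log still has to be backed up for it to be truncated
BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn';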

Bulk logged or Simple

Hi,
We are using the simple recovery model for one of our datamart
databases. We chose the simple model since we don't want to
worry about backing up the transaction log, as we can rebuild the
datamart from the OLTP database, and also to make effective use
of minimal logging (we are using SELECT..INTO and BULK
INSERT)....
When I was testing the datamart database I found that with the
simple recovery model SQL Server does minimal logging,
but I am reading earlier posts and it is mentioned that
only with the bulk-logged recovery model does SQL Server do minimal
logging.
Does the simple recovery model log the same as bulk-logged
(minimal logging for SELECT..INTO, BULK INSERT etc.) or
full (complete logging)?
Thanks
--Harvinder|||You specified:
>no bulk logged operations are logged in case
>of simple recovery model.
Does that mean that in the simple recovery model SELECT..INTO is
minimally logged or fully logged?
I tested it using DBCC SQLPERF(LOGSPACE) and I can see it
is minimally logged in simple mode.
Thanks
--Harvinder
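The measurement described above might look like the following sketch (the scratch and source table names are assumptions):

DBCC SQLPERF(LOGSPACE);                  -- note "Log Space Used (%)" before
SELECT * INTO dbo.T_scratch FROM dbo.SomeBigTable;
DBCC SQLPERF(LOGSPACE);                  -- compare the figure after
DROP TABLE dbo.T_scratch;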
>--Original Message--
>>only with the bulk-logged recovery model does SQL Server do minimal
>>logging.
>The correct statement is: in the case of the bulk-logged recovery model,
>bulk logged operations like BULK INSERT/bcp are minimally logged.
>In the case of the simple recovery model the transaction log size will be
>manageable, because the transaction log will be truncated each time a
>checkpoint occurs. No bulk logged operations are logged in the case of the
>simple recovery model.
>
>Refer to the following topics under BOL:
>"Selecting a Recovery Model"
>"simple recovery"
>"bulk-logged recovery"
>- Vishal
>"harvinder" <hs@.metratech.com> wrote in message
>news:031c01c37bb9$8e878c40$a301280a@.phx.gbl...
>> Hi,
>>
>> We are using the simple recovery model for one of our datamart
>> databases. We chose the simple model since we don't want to
>> worry about backing up the transaction log, as we can rebuild the
>> datamart from the OLTP database, and also to make effective use
>> of minimal logging (we are using SELECT..INTO and BULK
>> INSERT)....
>> When I was testing the datamart database I found that with the
>> simple recovery model SQL Server does minimal logging,
>> but I am reading earlier posts and it is mentioned that
>> only with the bulk-logged recovery model does SQL Server do minimal
>> logging.
>> Does the simple recovery model log the same as bulk-logged
>> (minimal logging for SELECT..INTO, BULK INSERT etc.) or
>> full (complete logging)?
>> Thanks
>> --Harvinder
>
>.
>|||Yes: in the simple recovery model, bulk logged operations are minimally
logged, and the transaction log is truncated at each checkpoint. These
operations are fully logged only under the full recovery model.
- vishal
"harvinder" <hs@.metratech.com> wrote in message
news:04f201c37bbe$91c511c0$a101280a@.phx.gbl...
> You specified:
> >no bulk logged operations are logged in case
> >of simple recovery model.
> Does that mean that in the simple recovery model SELECT..INTO is
> minimally logged or fully logged?
> I tested it using DBCC SQLPERF(LOGSPACE) and I can see it
> is minimally logged in simple mode.
> Thanks
> --Harvinder
> >--Original Message--
> >>only with the bulk-logged recovery model does SQL Server do minimal
> >>logging.
> >The correct statement is: in the case of the bulk-logged recovery model,
> >bulk logged operations like BULK INSERT/bcp are minimally logged.
> >In the case of the simple recovery model the transaction log size will be
> >manageable, because the transaction log will be truncated each time a
> >checkpoint occurs. No bulk logged operations are logged in the case of the
> >simple recovery model.
> >
> >Refer to the following topics under BOL:
> >"Selecting a Recovery Model"
> >"simple recovery"
> >"bulk-logged recovery"
> >- Vishal
> >"harvinder" <hs@.metratech.com> wrote in message
> >news:031c01c37bb9$8e878c40$a301280a@.phx.gbl...
> >> Hi,
> >>
> >> We are using the simple recovery model for one of our datamart
> >> databases. We chose the simple model since we don't want to
> >> worry about backing up the transaction log, as we can rebuild the
> >> datamart from the OLTP database, and also to make effective use
> >> of minimal logging (we are using SELECT..INTO and BULK
> >> INSERT)....
> >> When I was testing the datamart database I found that with the
> >> simple recovery model SQL Server does minimal logging,
> >> but I am reading earlier posts and it is mentioned that
> >> only with the bulk-logged recovery model does SQL Server do minimal
> >> logging.
> >> Does the simple recovery model log the same as bulk-logged
> >> (minimal logging for SELECT..INTO, BULK INSERT etc.) or
> >> full (complete logging)?
> >> Thanks
> >> --Harvinder
> >
> >.
> >