Showing posts with label package. Show all posts

Tuesday, March 27, 2012

Bulk Load Help - Parent Child relationship in schema

I am running a bulk load from a DTS package. I am able to insert the parent records successfully into the article table, but I cannot get the child data to insert at all. The package returns an "executed successfully" message, so I am having a tough time debugging. Here is my schema file, along with a sample of the XML file I am trying to load. Please let me know if you spot anything in the schema I can fix (the XML file is supplied to me, and I have no control over it). All I am trying to insert into the child table (xmlTopic) is the Article_ID and the Topic_Type. Thanks in advance.

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:sql="urn:schemas-microsoft-com:mapping-schema">
<xs:annotation>
<xs:appinfo>
<sql:relationship name="ArticleTopic"
parent="xmlARTICLE"
parent-key="ARTICLE_ID"
child="xmlTOPIC"
child-key="ARTICLE_ID" />
</xs:appinfo>
</xs:annotation>
<xs:element name="ARTICLE" sql:relation="xmlArticle" sql:key-fields="ARTICLE_ID">
<xs:complexType>
<xs:sequence>
<xs:element name="ATTRIBUTION" type="xs:string" sql:field="ATTRIBUTION"/>
<xs:element name="BLURB" type="xs:string" sql:field="BLURB"/>
<xs:element name="BODY" type="xs:string" sql:field="ARTICLE_BODY"/>
<xs:element name="BRAND" type="xs:string" sql:field="BRAND"/>
<xs:element name="BYLINE" type="xs:string" sql:field="BYLINE"/>
<xs:element name="COPYRIGHT" type="xs:string" sql:field="COPYRIGHT"/>
<xs:element name="FEATURE_BLURB" type="xs:string" sql:field="FEATURE_BLURB"/>
<xs:element name="FEATURE_IMAGE" type="xs:string" sql:field="FEATURE_IMAGE"/>
<xs:element name="HEADLINE" type="xs:string" sql:field="HEADLINE"/>
<xs:element name="NEWS_TYPE" type="xs:string" sql:field="NEWS_TYPE"/>
<xs:element name="SOURCE" type="xs:string" sql:field="SOURCE"/>
<xs:element name="TEASER" type="xs:string" sql:field="TEASER"/>
<xs:element name="URL" type="xs:string" sql:field="URL"/>
<xs:element name="TOPICS" sql:mapped="false">
<xs:complexType>
<xs:sequence>
<xs:element name="TOPIC" sql:relation="xmlTopic" sql:relationship="ArticleTopic" sql:key-fields="xmlTopicID">
<xs:complexType>
<xs:attribute name="TOPIC_TYPE" type="xs:string" sql:field="TOPIC_TYPE"/>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
<xs:attribute name="ARTICLE_ID" type="xs:string" sql:field="ARTICLE_ID"/>
<xs:attribute name="POSTING_DATE" type="xs:string" sql:field="POSTING_DATE"/>
<xs:attribute name="ARCHIVE_DATE" type="xs:string" sql:field="ARCHIVE_DATE"/>
</xs:complexType>
</xs:element>
</xs:schema>

<?xml version="1.0"?>
<!DOCTYPE NEWSFEED SYSTEM "http://www.blah.com">
<NEWSFEED>
<ARTICLE ARTICLE_ID="531320" POSTING_DATE="08-Mar-2006" ARCHIVE_DATE="01-Mar-2007">
<NEWS_TYPE>blah</NEWS_TYPE>
<HEADLINE><![CDATA[blah]]></HEADLINE>
<BLURB><![CDATA[blah]]></BLURB>
<BYLINE><![CDATA[blah]]></BYLINE>
<BODY><![CDATA[blahblahblah]]></BODY>
<ATTRIBUTION><![CDATA[blah]]></ATTRIBUTION>
<SOURCE><![CDATA[blah]]></SOURCE>
<FEATURE_BLURB><![CDATA[blah]]></FEATURE_BLURB>
<FEATURE_IMAGE></FEATURE_IMAGE>
<TEASER><![CDATA[blah]]></TEASER>
<COPYRIGHT><![CDATA[blah]]></COPYRIGHT>
<BRAND><![CDATA[vlah]]></BRAND>
<URL><![CDATA[blah]]></URL>
<TOPICS>
<TOPIC TOPIC_TYPE="BLPR"/>
<TOPIC TOPIC_TYPE="BLPR2"/>
</TOPICS>
</ARTICLE>
</NEWSFEED>

Well, it turns out I answered my own question. I checked the MSDN Library (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sqlxml3/htm/bulkload_7pv0.asp - see sql:relationship and the Key Ordering Rule) and found the following info:

"This means that in defining the <Customer> element, you must specify the CustomerID attribute in the schema before you specify <sql:relationship>. Otherwise, when an <Order> element enters into scope, XML Bulk Load generates a record for the CustOrder table; and when the XML Bulk Load reaches the </Order> end tag, it sends the record to SQL Server without the CustomerID foreign key column value."

So I put my sql:relationship at the end of the schema file, and it worked like a charm. For reference, here is my revised schema file:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:sql="urn:schemas-microsoft-com:mapping-schema">

<xsd:element name="ARTICLE" sql:relation="xmlArticle">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="ATTRIBUTION" type="xsd:string" sql:field="ATTRIBUTION"/>
<xsd:element name="BLURB" type="xsd:string" sql:field="BLURB"/>
<xsd:element name="BODY" type="xsd:string" sql:field="ARTICLE_BODY"/>
<xsd:element name="BRAND" type="xsd:string" sql:field="BRAND"/>
<xsd:element name="BYLINE" type="xsd:string" sql:field="BYLINE"/>
<xsd:element name="COPYRIGHT" type="xsd:string" sql:field="COPYRIGHT"/>
<xsd:element name="FEATURE_BLURB" type="xsd:string" sql:field="FEATURE_BLURB"/>
<xsd:element name="FEATURE_IMAGE" type="xsd:string" sql:field="FEATURE_IMAGE"/>
<xsd:element name="HEADLINE" type="xsd:string" sql:field="HEADLINE"/>
<xsd:element name="NEWS_TYPE" type="xsd:string" sql:field="NEWS_TYPE"/>
<xsd:element name="SOURCE" type="xsd:string" sql:field="SOURCE"/>
<xsd:element name="TEASER" type="xsd:string" sql:field="TEASER"/>
<xsd:element name="URL" type="xsd:string" sql:field="URL"/>
<xsd:element name="TOPIC" sql:relation="xmlTopic" sql:relationship="ArticleTopic">
<xsd:complexType>
<xsd:attribute name="TYPE2" type="xsd:string" sql:field="TopicType"/>
<xsd:attribute name="TOPIC_TYPE" type="xsd:string" sql:field="TopicID"/>
<xsd:attribute name="SIGNIFICANCE_FACTOR" type="xsd:string" sql:field="SignificanceFactor"/>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
<xsd:attribute name="ARTICLE_ID" type="xsd:string" sql:field="ARTICLE_ID"/>
<xsd:attribute name="POSTING_DATE" type="xsd:string" sql:field="POSTING_DATE"/>
<xsd:attribute name="ARCHIVE_DATE" type="xsd:string" sql:field="ARCHIVE_DATE"/>
</xsd:complexType>
</xsd:element>

<xsd:annotation>
<xsd:appinfo>
<sql:relationship name="ArticleTopic"
parent="xmlArticle"
parent-key="ARTICLE_ID"
child="xmlTopic"
child-key="ARTICLE_ID" />
</xsd:appinfo>
</xsd:annotation>

</xsd:schema>
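For anyone who needs to drive the load outside of a DTS package, the same mapping schema can be fed to the SQLXML Bulk Load component directly. Below is a hedged sketch using the OLE Automation stored procedures (which must be enabled on the server); the file paths, database name, and connection string are placeholders, and the same object is more commonly driven from VBScript or a DTS ActiveX task:

```sql
-- Hypothetical sketch: invoking SQLXML Bulk Load from T-SQL via OLE Automation.
-- Paths and the connection string below are placeholders.
DECLARE @obj int;
EXEC sp_OACreate 'SQLXMLBulkLoad.SQLXMLBulkLoad', @obj OUT;
EXEC sp_OASetProperty @obj, 'ConnectionString',
    'provider=SQLOLEDB;data source=(local);database=MyDb;integrated security=SSPI';
-- Execute takes the mapping schema and the XML data file.
EXEC sp_OAMethod @obj, 'Execute', NULL, 'C:\schema.xsd', 'C:\newsfeed.xml';
EXEC sp_OADestroy @obj;
```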


bulk load change in 2005?

I have a DTS package that does some bulk load operations without a problem in 2000. When trying to convert them to SSIS, or when running the same DTS package against a copy of the database (in 2000 compatibility mode) on a 2005 server, I get "Bulk load data conversion error (truncation)" for row xxxx....

Has the bulk load process changed in 2005?

My sproc is using a bulk insert command with a .fmt file.
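For reference, a format-file-driven bulk insert of the kind described generally looks like this (a hedged sketch; the table, data file, and format file names are placeholders):

```sql
-- Hypothetical example of a format-file bulk insert; MAXERRORS lets the load
-- continue past a few bad rows while you diagnose the truncation errors.
BULK INSERT dbo.MyTable
FROM 'C:\data\feed.txt'
WITH (FORMATFILE = 'C:\data\feed.fmt', MAXERRORS = 10);
```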

Even when you are using SSIS for this, I think this question is better asked in the database engine forum...|||

Daveko wrote:

Has the bulk load process changed in 2005?

Yes, there have been certain changes in the bulk load that may cause seemingly fine SQL Server 2000 scenarios to fail when they run against SQL Server 2005: the engine now enforces stricter data purity checks and additional physical consistency checks for the input data streams both in BCP and BULK INSERT/OPENROWSET.|||

Is there any way to relax those checks? I was under the assumption that running a database in 2000 mode would allow backward compatibility. I am under a huge time crunch now, and at this point my only contingency plan is to keep a 2000 server up for this specific database.

Any assistance would be appreciated..

|||Unfortunately, there is no 'officially' documented way of relaxing those checks. Please contact CSS - they might be able to assist you.|||

What is CSS?

I have cleaned up my data and got the import to run successfully.

|||CSS stands for Customer Service and Support. It used to be called PSS.


Sunday, March 25, 2012

Bulk Insert with another outstanding result set should be run with XACT_ABORT on

All,

I have an SSIS package which calls several other SSIS packages. The "mother" package has TransactionOption set to "Supported". There is a sequence container and an embedded Execute Package Task for each of my embedded packages. Each of the sequence containers has TransactionOption = Required. DTC is running.

When I run my mother package I get the following error within the first package executed "Bulk Insert with another outstanding result set should be run with XACT_ABORT on." I've looked high and low for others with this problem, but haven't found any resolution. Can anyone tell me how to resolve this error? I have tried using BEGIN TRANSACTION instead of using Required and I turn on XACT_ABORT, but it doesn't seem to help either. Any help on this would be greatly appreciated.

Thanks

Update on this - I was able to get around this problem by switching to native SQL Server transactions. However, it still puzzles me that I can't use DTC instead. Please, any help on this problem would be greatly appreciated. Just to add a little more information - there is a SQL Server 2005 instance (destination) and a SQL Server 2000 instance (source) involved. Microsoft? Any answer on this?|||I'm getting the same problem.

An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Bulk Insert with another outstanding result set should be run with XACT_ABORT on."

What does it mean, 'another outstanding resultset'?

I have a bulk insert updating table A (multicast also creates data to update table A history)

This is the first task in a series of 4, with the other 3 occurring directly after the first one, all doing a similar thing, i.e. a bulk insert to table B and table B history.
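One thing worth trying when this error appears is exactly what the message suggests: turning XACT_ABORT on in the session that performs the bulk insert. A hedged sketch (object and file names below are placeholders, not from the thread):

```sql
-- Hypothetical sketch: run the bulk insert inside an explicit transaction
-- with XACT_ABORT on, so any run-time error rolls back the whole batch.
SET XACT_ABORT ON;
BEGIN TRANSACTION;
BULK INSERT dbo.TableA FROM 'C:\data\tableA.txt' WITH (TABLOCK);
COMMIT TRANSACTION;
```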


Bulk Insert with Added Columns

Hi, I'd like to do a bulk copy insert inside a DTS package, but the table I'm inserting to has more columns than the text file. I know you can change the format file to take care of this, but how do you insert actual values into those extra columns instead of just NULLs? The values for those extra columns are uniquely identified by the text file.
What I could do is a bulk insert followed by an execute SQL command that updates the table by replacing the NULL values with actual values. However, this db is multi-user, with each user having different values to insert, so if two imports are done at the same time, someone could be updating the rows of another person's import. Locking the table is also not a possibility, because the imports take a long time and we don't want one user waiting on another one to finish.
Right now, I'm using an ActiveX script in a transform data task in the DTS package. So there are two transformations going on in that task: one is the copy column from the text file, and the other is adding values to the extra columns. This works well; however, it requires row-by-row processing and thus takes over three times longer than a bulk insert.
Does anyone have a solution for setting values of extra columns in a bulk insert? Any help is appreciated. Thanks!|||Where do those extra values come from?
ML|||These extra values are global variables in the dts package
"ML" wrote:

> Where do those extra values come from?
>
> ML|||In that case you can design a set-based solution. You need something like this:
1) You need access to the file - one way to achieve this is linking the file as a linked server. Read more on this here:
http://msdn.microsoft.com/library/d...a_8gqa.asp
http://msdn.microsoft.com/library/d...ma_ini_file.asp
2) You need access to your global variables - the best way would be to create a procedure that accesses the linked file and accepts the values of those global variables as parameters;
3) In your procedure you then combine the values from the source file and the values from the DTS package, and insert them into the destination table(s).
You could even use a folder-sniffer to start the job automatically, as soon as the source file becomes available in the source folder.
ML
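The three steps above can be sketched as a single procedure. This is a hedged illustration only: `TxtSrv` is a hypothetical linked server pointing at the folder of text files, `import#txt` a hypothetical file name, and the parameters stand in for the DTS global variables.

```sql
-- Hypothetical set-based import: rows come from the linked text file,
-- the extra columns come from parameters supplied by the DTS package.
CREATE PROCEDURE dbo.ImportWithExtras
    @UserID int,
    @BatchLabel varchar(50)
AS
INSERT INTO dbo.Destination (Col1, Col2, UserID, BatchLabel)
SELECT t.Col1, t.Col2, @UserID, @BatchLabel
FROM TxtSrv...[import#txt] AS t
```

Because the insert and the per-user values land in one statement, there is no window in which another user's update could touch these rows.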

Sunday, March 11, 2012

Bulk Insert Import Question

Hi:
I am importing data from text files into SQL tables using the Bulk Insert object in a DTS package. The first row of each text file contains headings. For some text files, the headings are also imported into the tables, and for some they are not. I am not sure why that is happening.
I do not want to import the headings. Is there any way to tell the Bulk Insert object not to import the first row (the headings)? Please let me know.
Thanks a lot|||BULK INSERT [YourTable] FROM 'c:\yourfile.txt'
WITH (FIRSTROW = 5)
Where 5 indicates "start importing from the 5th row of text". Adjust for
your import as necessary. Don't use quotation marks around the number.
"Mike" wrote:

> Hi:
> I am importing data from text files into SQL tables using the Bulk Insert object
> in a DTS package. The first row of each text file contains headings. For some text
> files, the headings are also imported into the tables, and for some they are not.
> I am not sure why that is happening.
> I do not want to import the headings. Is there any way to tell the Bulk Insert
> object not to import the first row (the headings)? Please let me know.
> Thanks a lot|||Thank you very much
"Mark Williams" wrote:
> BULK INSERT [YourTable] FROM 'c:\yourfile.txt'
> WITH (FIRSTROW = 5)
> Where 5 indicates "start importing from the 5th row of text". Adjust for
> your import as necessary. Don't use quotation marks around the number.
> --
>
> "Mike" wrote:
>

Thursday, March 8, 2012

Bulk Insert Error

Simple test project. Created Flat File connection, database connection (both local), and Bulk Insert Task. When running the package I get the following error:

[Bulk Insert Task] Error: An error occurred with the following error message: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.Bulk load: An unexpected end of file was encountered in the data file.".

I've tried different settings for the Flat File config, and the database connection, but still get the error. Any suggestions would be helpful.

Tks.

Have you looked into the error message? Specifically the part about the task encountering an unexpected end of file?|||Yes. The file seems fine. The Flat File connection tool reads the file fine (i.e. it displays the records appropriately when creating the connection). I can read this file with any number of other tools (e.g. import into Excel).|||

Here's what you can do to narrow down the problem:

1. Use Profiler to capture the bulk insert statement generated by the bulk insert task.

2. Look at the options in the bulk insert statement to make sure they are correct. You may get this error if the options, like row delimiter, are not set correctly. In that case, you can change them in the bulk insert task UI.

You can also run the bulk insert statement outside of SSIS (e.g. Management Studio) to make sure the statement is correct.
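When checking the statement by hand, a mismatched field or row terminator is a common cause of "unexpected end of file". A hedged example of the kind of statement to test in Management Studio (table and file names are placeholders):

```sql
-- Hypothetical check: run the equivalent statement directly and confirm the
-- terminators match what is actually in the data file.
BULK INSERT dbo.TargetTable
FROM 'C:\data\input.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
```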

Friday, February 24, 2012

Bulk Import Diacritics Issue

Hi There,
When I do a bulk import from a DTS package into a table, the import fails at records that contain diacritic characters like C, Í and E. The field type is nvarchar.
Is it a collation type I need to assign to the field?
Do you have any idea how I can bypass this?
Thanks,
Kevin.|||Hi Kevin,
"Kevin Humphreys" wrote:
> Hi There,
> When I do a bulk import from a DTS Package into a table, the import fails at
> records that contain diacritic characters like C, Í and E. The field type is
> nvarchar.
> Is it a collation type I need to assign to the field?
> Do you have any idea how I can bypass this?
> Thanks,
> Kevin.
>
It seems that you need to change the code page (-C) or specify the Unicode (-w or -N) parameters for BCP. Check Books Online for the values.
John
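If the load runs through BULK INSERT rather than bcp, the corresponding options look roughly like this (a hedged sketch; the table and file names are placeholders — DATAFILETYPE = 'widechar' corresponds to bcp's -w, and CODEPAGE to -C):

```sql
-- Hypothetical example for a Unicode data file:
BULK INSERT dbo.Articles
FROM 'C:\data\articles.txt'
WITH (DATAFILETYPE = 'widechar');

-- Or, for a single-byte file in a known code page:
BULK INSERT dbo.Articles
FROM 'C:\data\articles.txt'
WITH (CODEPAGE = '1252');
```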


Tuesday, February 14, 2012

Building/Running a Package.

Hi,

I have a solution with a few projects in it, and each project has a few packages. The problem/question I have is this: when I am working on one of the packages, I have only that package open. When I try to run/test it, every single package in every project gets opened and recompiled/rebuilt before my current package gets to run. It's very frustrating and time consuming. Is there a way to somehow disable this weird behavior? Is there a way to build just the package I am currently working on?

Please, help.

Thanks,

Victor.

This behavior occurs if you configured the project to build a deployment utility - unfortunately, opening each package is required for that. You can switch the deployment utility off temporarily to avoid this.|||

Thank you, Michael. I'll try that.

Otherwise, I'll have to create a solution for each package separately.

Sunday, February 12, 2012

Building a Truly Platform Agnostic SSIS Package (SQL Server and Oracle Interop)

As painful as it is proving to be, I am trying to create a single package that is vendor neutral between SQL Server and Oracle.

At first, I thought that as long as I was going OLEDB in and out, and I ensured that my SQL syntax was generic enough, I'd be OK.

Well, not exactly. I am finding there is more to this. Specifically, metadata gets generated differently based on the source/targets.

For example, on an OLE DB Source task, datatypes will vary (i.e. NUMBER in Oracle vs Int in SQL). The job here is to pick the common denominator for the SSIS specific type. Time consuming, but doable, I think.

Another issue is with an OLE DB Destination. If you choose Table for Data Access Mode, this value gets written to the dtsx XML file. So, even when both RDBMSs have the same schema (which is an obvious prerequisite), choosing SQL Server writes "dbo.DimTable" to the file, while Oracle writes "SCHEMA.DIMTABLE".

So, I am am wondering, what is the best way to address this?

My initial thought was to use a dtsConfig file (which I am already using extensively) and set the target table names accordingly. This approach would have the added benefit of allowing post-deployment configuration changes should a table name change, for example.

This section of the dtsx file shows the values of interest:

Code Snippet

<component id="138" name="OLE_DST_DimTable" componentClassID="{E2568105-9550-4F71-A638-B7FE42E66922}" description="OLE DB Destination" localeId="-1" usesDispositions="true" validateExternalMetadata="True" version="4" pipelineVersion="0" contactInfo="OLE DB Destination;Microsoft Corporation;Microsoft SqlServer v9; (C) 2005 Microsoft Corporation; All Rights Reserved; http://www.microsoft.com/sql/support;4">
<properties>
<property id="139" name="CommandTimeout" dataType="System.Int32" state="default" isArray="false" description="The number of seconds before a command times out. A value of 0 indicates an infinite time-out." typeConverter="" UITypeEditor="" containsID="false" expressionType="None">0</property>
<property id="140" name="OpenRowset" dataType="System.String" state="default" isArray="false" description="Specifies the name of the database object used to open a rowset." typeConverter="" UITypeEditor="" containsID="false" expressionType="None">"ORASCHEMA"."DIMTABLE"</property>

<!-- More Properties -->
</properties>
</component>

Ideally, I'd like configuration-time control over the name attribute of the component element (highlighted) so that I can set the value of the property element with the OpenRowset attribute (also highlighted). This way, presumably, as long as the datatypes were generic enough, the mapping would just work.

But, in walking through the dtsConfig tree, I don't see these elements or attributes exposed.

I would sincerely appreciate any suggestions on how to accomplish this.

Rick

The OpenRowset property is not exposed through Configurations. You can try using the Table From Variable access mode. This goes through OpenRowset with the table name stored in a variable. You can then use Configurations to update the variable.

|||Great idea, I will try that thanks.|||

Hi Ted,

How do you recommend I go about setting the variable? I have used the Script Task in the Control Flow, but the Script Component seemed somewhat unintuitive given its directional contexts (source, destination, transform). I don't want to put the script in the Control Flow because I want to scope the variable initialization to the specific Data Flow Task that is in context at the time.

It seems that I want to just put a script in the data flow task that checks the provider value and if SQL, sets the variable to "dbo.TableName" and if Oracle sets the variable to "SCHEMA.TABLENAME".

You mention "You can then use Configurations to update the variable." Can you please elaborate?

Thanks,


Rick

|||

RickGaribay.NET wrote:

Hi Ted,

How do you recommend I go about setting the variable? I have used the Script Task in the Control Flow, but the Script Component seemed somewhat unintuitive given its directional contexts (source, destination, transform). I don't want to put the script in the Control Flow because I want to scope the variable initialization to the specific Data Flow Task that is in context at the time.

It seems that I want to just put a script in the data flow task that checks the provider value and if SQL, sets the variable to "dbo.TableName" and if Oracle sets the variable to "SCHEMA.TABLENAME".

You mention "You can then use Configurations to update the variable." Can you please elaborate?

Thanks,


Rick

Package configurations are a mechanism to set various component and variable properties through external means, such as XML configuration files. Take a look in SQL Server Books Online: http://msdn2.microsoft.com/en-us/library/ms141682.aspx
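As a concrete illustration, a dtsConfig entry that sets a table-name variable might look like the following hedged sketch; the package path and the variable name `User::DestTableName` are hypothetical, not taken from the thread:

```xml
<!-- Hypothetical dtsConfig fragment: sets the variable that the
     "Table From Variable" access mode reads the table name from. -->
<Configuration ConfiguredType="Property"
               Path="\Package.Variables[User::DestTableName].Properties[Value]"
               ValueType="String">
  <ConfiguredValue>SCHEMA.DIMTABLE</ConfiguredValue>
</Configuration>
```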

A few additional links:

http://sqljunkies.com/WebLog/knight_reign/archive/2004/12/07/5445.aspx http://www.sqlis.com/26.aspx|||

Thanks Matthew.

We came up with a potential solution simply involving the initialization and assignment of named variables with the specific fully qualified table name based on the provider. To your point, the added benefit of this is that if we wanted to ever override these values, we could simply expose them via package configurations.

Thanks again,

Rick

Building a mailing list

Hi folks,

I'm building a DTS package which needs to mail a list of users nightly

The mailing list needs to be dynamic so I'm using the dynamic properties tab to populate the To, From etc. fields in an SMTP DTS task.

I need to construct a script which will run within the DTS package and build a mailing list file from a table of users in the connected db. The Dynamic properties task will then pull from this data file when populating the 'To' field.

The basic select statement is simple (e.g. SELECT email FROM employees WHERE role = 'mgr') however I need the output to be a single line of email addresses separated by commas (e.g. Email1,Email2,Email3...etc).

I'm a bit unsure on how to go about doing / writing this.

Can anyone help?

Thanks,

Dave|||This would be far easier if you trashed the DTS package, built a stored procedure, and then created a job to run the procedure.|||Thought about that, but my stored proc skills are nothing to write home about. If you have anything I could tweak to suit my needs, I'd be grateful.

Thanks,

Dave|||Try running this SQL through Query Analyzer:

declare @MyEmailString varchar(8000)

select @MyEmailString = coalesce(@MyEmailString + ',', '') + coalesce(email, '')
from employees
where role = 'MGR'

select @MyEmailString|||Thanks Blindman.

Still trying to get my head around the syntax and use of COALESCE, but it works.

Thanks again,

Dave|||Hi Blindman,

I need to run the code below against a MySql database.
It works fine when I modify it for an MS SQL db; however, I'm getting syntax errors when running it against a MySQL (5.1.9) db.

Any ideas?

Thanks,

Dave|||I wouldn't be surprised if this syntax didn't work in MySQL. Try posting your question in the MySQL forum.|||Lord, where's R937 when somebody mentions his favorite toy! You probably want the GROUP_CONCAT (http://dev.mysql.com/doc/refman/5.1/en/group-by-functions.html) function in MySQL.

<Afterthought>
After posting this, I realized that Rudy would probably pitch a fit, mostly because he enjoys pitching fits. Note that I do NOT consider MySQL to be a toy; Rudy just likes to play with it because MySQL offers functions that violate the basic rules of relational algebra (such as the GROUP_CONCAT() function). This violation of the rules doesn't make MySQL a bad product in any way, although I see it as removing MySQL from the category of relational databases, since it doesn't follow the basic rules required for a relational database.

-PatP
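For completeness, the MySQL version of the earlier COALESCE trick, using GROUP_CONCAT against the same hypothetical employees table from the thread:

```sql
-- Builds the comma-separated address list in one statement (MySQL only).
SELECT GROUP_CONCAT(email SEPARATOR ',') AS MyEmailString
FROM employees
WHERE role = 'mgr';
```

Note that GROUP_CONCAT has a result-length cap controlled by the group_concat_max_len system variable, which you may need to raise for long lists.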

Friday, February 10, 2012

Build Query button in DTS 2000 Package Designer

When trying to modify a DTS package in DTS 2000 Package Designer in SQL Server 2005, the Build Query button on a Transform SQL Data Task box is not enabled.

Does anyone know why?

If it's not a complex package, why don't you try building it in SSIS?