Showing posts with label table. Show all posts

Wednesday, March 28, 2012

Ok, time to start a war

Trivia perhaps, but this has bothered me for a long time. Over the years,
I've noticed that some folks are adamant that table names should be
singular; others are adamant they should be plural. Myself, I believe in the
plural school. I simply don't understand the argument that a table should be
named singular, as it holds a collection of entities. We have tables of
Customers, Contacts, Addresses, Quotes, Sales, etc. A column is AN attribute
of an entity, a row is AN instance of an entity. During development, when I
populate a list with Customers, the immediate thought is to use the
"Customers" table, not the "Customer" table. I'm interested in someone
convincing me otherwise.

Earl (brikshoe@.newsgroups.nospam) writes:
> Trivia perhaps, but this has bothered me for a long time. Over the
> years, I've noticed that some folks are adamant that table names should
> be singular; others are adamant they should be plural. Myself, I believe
> in the plural school. I simply don't understand the argument that a
> table should be named singular, as it holds a collection of entities. We
> have tables of Customers, Contacts, Addresses, Quotes, Sales, etc. A
> column is AN attribute of an entity, a row is AN instance of an entity.
> During development, when I populate a list with Customers, the immediate
> thought is to use the "Customers" table, not the "Customer" table. I'm
> interested in someone convincing me otherwise.
I have a suggestion for a compromise! Let's use the Slovene for the table
names. Then the two camps can meet each other half-way and use the dual
number for table names!
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Earl wrote:

> Trivia perhaps, but this has bothered me for a long time. Over the years,
> I've noticed that some folks are adamant that table names should be
> singular; others are adamant they should be plural. Myself, I believe in the
> plural school. I simply don't understand the argument that a table should be
> named singular, as it holds a collection of entities. We have tables of
> Customers, Contacts, Addresses, Quotes, Sales, etc. A column is AN attribute
> of an entity, a row is AN instance of an entity. During development, when I
> populate a list with Customers, the immediate thought is to use the
> "Customers" table, not the "Customer" table. I'm interested in someone
> convincing me otherwise.
Naming conventions are important but their most important feature is
that they are applied consistently. I can live with either plural or
singular names as long as everyone sticks to the same convention. What
I don't want to see is one table called "Customer" and another in the
same project called "Contacts".
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--

ROFL!
And maybe we can also introduce inflections to designate actions. :)
ML
http://milambda.blogspot.com/

Hear, hear!
I also only like lower case and separate words with _ because you can run
into all sorts of problems if you develop on a case insensitive database and
then go to a case sensitive one. I was part of a big project that had that
exact problem and it convinced me to not use camel case and to use lower
case and underscore instead.
Once you get used to seeing upper case keywords and lower case identifiers
then it's fine, a bit like moving from ANSI-89 to ANSI-92 on the join syntax -
what a pain that was!
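For readers who missed that transition, the syntax change Tony refers to looks like this (table names are illustrative, not from the thread); the ANSI-92 form makes the join condition explicit instead of burying it in the WHERE clause:

```sql
-- ANSI-89 style: join condition mixed into the WHERE clause
SELECT c.name, o.order_date
FROM customers c, orders o
WHERE c.customer_id = o.customer_id;

-- ANSI-92 style: explicit JOIN ... ON
SELECT c.name, o.order_date
FROM customers c
JOIN orders o ON o.customer_id = c.customer_id;
```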
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1138268495.874429.75640@.f14g2000cwb.googlegroups.com...
> Earl wrote:
>
> Naming conventions are important but their most important feature is
> that they are applied consistently. I can live with either plural or
> singular names as long as everyone sticks to the same convention. What
> I don't want to see is one table called "Customer" and another in the
> same project called "Contacts".
> --
> David Portas, SQL Server MVP
> Whenever possible please post enough code to reproduce your problem.
> Including CREATE TABLE and INSERT statements usually helps.
> State what version of SQL Server you are using and specify the content
> of any error messages.
> SQL Server Books Online:
> http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
> --
Or perhaps use a Macromedia Flash-encoded object in a table name so that
when you look at it through a graphical tool it gives an animated
representation of what the table is :)
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"ML" <ML@.discussions.microsoft.com> wrote in message
news:29C18C9B-1411-48F8-A023-0111A9938EC8@.microsoft.com...
> ROFL!
> And maybe we can also introduce inflections to designate actions. :)
>
> ML
> --
> http://milambda.blogspot.com/

Of course that's customizable - per user. And DBAs have a few extra
settings. :)
ML
http://milambda.blogspot.com/

What!!! You don't like my naming system. LOL, I am guilty of that.
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> wrote in message
news:1138268495.874429.75640@.f14g2000cwb.googlegroups.com...
> Earl wrote:
>
> Naming conventions are important but their most important feature is
> that they are applied consistently. I can live with either plural or
> singular names as long as everyone sticks to the same convention. What
> I don't want to see is one table called "Customer" and another in the
> same project called "Contacts".
> --
> David Portas, SQL Server MVP
> Whenever possible please post enough code to reproduce your problem.
> Including CREATE TABLE and INSERT statements usually helps.
> State what version of SQL Server you are using and specify the content
> of any error messages.
> SQL Server Books Online:
> http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
> --
>

There doesn't seem to be any consensus one way or another on this issue of
singular vs. plural naming conventions.
However, when I see table names prefixed with "tbl", I can't help but
cringe, because I know there will be more stupidity in store when I start
looking at their choice of keys, T-SQL, and indexes (or lack thereof). :-)
"Earl" <brikshoe@.newsgroups.nospam> wrote in message
news:Oy2upQkIGHA.3192@.TK2MSFTNGP10.phx.gbl...
> Trivia perhaps, but this has bothered me for a long time. Over the years,
> I've noticed that some folks are adamant that table names should be
> singular; others are adamant they should be plural. Myself, I believe in
> the plural school. I simply don't understand the argument that a table
> should be named singular, as it holds a collection of entities. We have
> tables of Customers, Contacts, Addresses, Quotes, Sales, etc. A column is
> AN attribute of an entity, a row is AN instance of an entity. During
> development, when I populate a list with Customers, the immediate thought
> is to use the "Customers" table, not the "Customer" table. I'm interested
> in someone convincing me otherwise.
>

On Thu, 26 Jan 2006 11:41:25 -0000, "Tony Rogerson"
<tonyrogerson@.sqlserverfaq.com> wrote:
in <#Qswf1mIGHA.3036@.tk2msftngp13.phx.gbl>

>Hear Hear!
>I also only like lower case and separate words with _ because you can run
>into all sorts of problems if you develop on a case insensitive database and
>then go to a case sensitive one. I was part of a big project that had that
>exact problem and it convinced me to not use camel case and to use lower
>case and underscore instead.
I'd have to agree with that as it simplifies moving your data in and out of
PostgreSQL. :-)
Stefan Berglund
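A tiny sketch of the convention being advocated in this thread (all names are illustrative): lower-case identifiers with underscores, upper-case keywords, and one pluralization style applied consistently:

```sql
-- One convention, applied consistently: lower case, underscores, plural
CREATE TABLE customers (
    customer_id   INT NOT NULL PRIMARY KEY,
    customer_name VARCHAR(100) NOT NULL
);

-- Plural again - mixing "customers" with a singular "contact" is
-- exactly the inconsistency David warns about
CREATE TABLE contacts (
    contact_id  INT NOT NULL PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES customers (customer_id)
);
```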

Monday, March 26, 2012

Oh No This Is A Disaster

Ok, this is really bad.

A table has been accidentally DROPPED from our SQL Database...

We have the transaction log from the very day that this table was created...

However, as we are all C++ programmers (and not DB people!) we are not sure how to do this!

Could anyone explain how to recreate a table from a transaction log, perhaps up to a specific date - or point us to a resource on the web.

Thanks for any advice you can give. (Head now under the table, hiding from the management :mad:)

Mark.

Could you maybe restore from those old transaction logs (do you mean you have transaction log backups from the time the table was created)?
Note that all tables, SPs, etc. will then be restored as they were at that older point in time.

Hi - thanks for the reply - In actual fact, the transaction log is complete - it has never been backed up...

In effect, I think what we need to do is replay the transaction log up until a certain point, BEFORE the table was accidentally dropped, thus, in effect 'rebuilding' the database (or maybe we can filter the transaction log for just one table?) from scratch.

We don't mind if we lose some data, say, up until the nearest check point before the accident, we can then manually re-enter the data...

Our current line of thinking is that we first need to back up the t/log (it will be a full backup since it's never been backed up) and then restore
it up to a certain point - however this is a programming shop and we're not really sure of the process/syntax required to do this - we just treat the server as a black box and pump data in/out with SQL statements!

We're sinking fast!

The SQL server is SQL Server 7 - we have just stopped the server and literally copied the disk files - just in case - we would like to take those files to a second box, so that we can work on it without causing any more damage(!) - Is there a method to do this?

Thanks again for any assistance you can offer.

Mark.

Hi Mark,
I know that you can do a partial restore in SQL Server 2000 to a named mark in the transaction log, but I do not think this option is available to you with version 7. I just went searching on-line and found no evidence to counter my suspicions... Since you have never done a backup of the transaction log, it is pretty useless to you. When is the last backup you made of your database?
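For reference, the general point-in-time restore pattern (valid back to SQL Server 7.0) is sketched below. The database name, file paths, and timestamp are placeholders, and the whole thing presupposes an existing full database backup to serve as the base for the log restore - which is exactly the catch the replies point out: a transaction log alone, never backed up and with no full backup behind it, gives you nothing to replay it against.

```sql
-- 1. Back up the tail of the transaction log without truncating it
BACKUP LOG OurDb TO DISK = 'C:\backup\ourdb_tail.trn' WITH NO_TRUNCATE;

-- 2. Restore the last full database backup, leaving the database
--    ready to accept further log restores
RESTORE DATABASE OurDb FROM DISK = 'C:\backup\ourdb_full.bak' WITH NORECOVERY;

-- 3. Replay the log, stopping just before the table was dropped
RESTORE LOG OurDb FROM DISK = 'C:\backup\ourdb_tail.trn'
    WITH RECOVERY, STOPAT = '2002-03-25 14:30:00';
```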

Friday, March 23, 2012

Odd timing problem MSAccess to SS2005

I have an SSIS package that takes data from a table in Access and puts it into a fact table in SS2005. Very little data manipulation is done. It processes approximately 1.5 million rows when it runs weekly. The process is run in an SSIS package that is called by a parent package, and all of that (including the use of the config files and accessing the parent variables) is working fine.

The issue is there is one field in the Access table that must be put into a different SS2005 fact table.

When I run the data flow task that loads the first fact table, it completes in less than two minutes. However, if I either (a) put a multicast step in the dataflow task to redirect a copy of the key data and remaining field to the second fact table, or (b) copy that step in the package to have it perform the same tasks with the different target (and using just the key and the remaining field), the execution time suddenly jumps to 30 minutes. In the case of (b), it remains true whether the copied step remains in the package or is executed in its own package, and also remains true if the package is loading against a table that starts out empty or with data already in it.

Has anyone ever bumped into a situation like this?

This could be because of a number of things, but I think the most likely causes are that the slow table is heavily indexed or has foreign keys being enforced. Also, make sure you're using fast load.

[banging head against desk]

Fast load ... that was it. Doggone it. Thanks for indulging a goof.

Wednesday, March 21, 2012

odd TEXT type field problem

Hi,
There is a db with 200GB, with ~160 GB data in it. Most of the data was a
TEXT type field in a table. That field has been dropped and added back with
default NULL but it appears I still have ~160 GB of data.
How is that possible?
Would updatestats help?
The table in question has a two field composite clustered index, but not on
the TEXT type field.
The truth is the whole thing is very fragmented. Would defragmenting the clustered
index place an exclusive lock on the table?
Your help would be appreciated,
Janos

You can try running DBCC CLEANTABLE on the table that had the TEXT column.
Look up syntax in BOL.
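A minimal sketch of the suggested command (the database and table names here are assumed, not from the post; the optional third argument limits rows processed per transaction):

```sql
-- Reclaim the space left behind by the dropped TEXT column
DBCC CLEANTABLE ('MyDatabase', 'dbo.MyBigTable', 10000);
```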
Adam Machanic
SQL Server MVP
http://www.sqljunkies.com/weblog/amachanic
--
"Janos Horanszky" <kob_uki@.hotmail.com> wrote in message
news:emmBIbZEFHA.3368@.TK2MSFTNGP10.phx.gbl...
> Hi,
> There is a db with 200GB, with ~160 GB data in it. Most of the data was a
> TEXT type field in a table. That field has been dropped and added back
with
> default NULL but it appears I still have ~160 GB of data.
> How is that possible?
> updatestats would help?
> The table in question has a two field composite clustered index, but not
on
> the TEXT type field.
> The truth is the whole thing is very fragmented. Defragmenting the
clustered
> index would place exclusive lock on the table?
> Your help would be appreciated,
> Janos
>

I appreciate it, Adam. I will try it on our test environment and I'll let you know
asap.
Janos
"Adam Machanic" <amachanic@.hotmail._removetoemail_.com> wrote in message
news:eDFFq6dEFHA.1012@.TK2MSFTNGP14.phx.gbl...
> You can try running DBCC CLEANTABLE on the table that had the TEXT column.
> Look up syntax in BOL.
> --
> Adam Machanic
> SQL Server MVP
> http://www.sqljunkies.com/weblog/amachanic
> --
>
> "Janos Horanszky" <kob_uki@.hotmail.com> wrote in message
> news:emmBIbZEFHA.3368@.TK2MSFTNGP10.phx.gbl...
> with
> on
> clustered
>

Odd Sum() problem

Hi everyone, first time posting here.

I am having an odd problem with Crystal Reports XI. I have a small database table in Oracle, only has 8 records in it. I want to do a grand total for each column in the database (adding up all 8 records).

But when I use the Sum(fieldname) command, I get a number that is way off. The other commands like Maximum(fieldname) return the value I expect. But Sum is way off.

I am using the columns in a different part of the report, and the report is broken down into sections that match the different records of this database. What I need is a grand total for each column for the whole report.

I even tried a running total, and it returns the same number that is more than twice what it should be. There has to be something I am missing.

Thanks for the help. I am pretty new to .net and programming so I might have used the wrong terms for something.Is the detail section hidden or supressed? If so, have you checked to see if you are selecting distinct records? When you run te report what is the record count in the lower right corner of the screen? It should only be 8.

Odd query plan for view

I have a SQL 2000 table containing 2 million rows of Trade data. Here
are some of the columns:

[TradeId] INT IDENTITY(1,1) -- PK, non-clustered
[LoadDate] DATETIME -- clustered index
[TradeDate] DATETIME -- non-clustered index
[Symbol] VARCHAR(10)
[Account] VARCHAR(10)
[Position] INT
etc..

I have a view which performs a join against a security master table (to
gather more security data). The purpose of the view is to return all
the rows where [TradeDate] is within the last three trading days.

The query against the view takes around 30 minutes. When I view
the query plan, it is not using the index on the [TradeDate] column but
is instead using the clustered index on the [LoadDate] column... The
odd thing is, the [LoadDate] column is not used anywhere in the view!

For testing purposes, I decided to do a straight SELECT against the
table (minus the joins) and that one ALSO uses the clustered index scan
against a column not referenced anywhere in the query.

There is a reason why I have not posted my WHERE clause until now. The
reason is that I am doing what I think is a very inefficient clause:

WHERE [TradeDate] >= fGetTradeDateFromThreeDaysAgo(GetDate())

The function calculates the proper trade date based on the specified
date (in this case, the current date). It is my understanding that the
function will be called for all rows. (Which COULD explain the
performance issue...)

However, this view has been around for ages and never before caused any
sort of problems. The issue actually started the day after I had to
recreate the table. (I had to recreate the table because some columns
were added and others were renamed.)

On a side note, if I replace the WHERE clause with a hard-coded date
(as in 'WHERE [TradeDate] >= '20060324'), the query performs fine but
STILL uses the clustered index on the [LoadDate] column.

(JayCallas@.hotmail.com) writes:
> The query against the view takes over around 30 minutes. When I view
> the query plan, it is not using the index on the [TradeDate] column but
> is instead using the clustered index on the [LoadDate] column... The
> odd thing is, the [LoadDate] column is not used anywhere in the view!

But "Clustered index scan" is just the same as "Table Scan". So it is
not very strange. No non-clustered index was good, so it scans the
index.

> There is a reason why I have not posted my WHERE clause until now. The
> reason is that I am doing what I think is a very inefficient clause:
> WHERE [TradeDate] >= fGetTradeDateFromThreeDaysAgo(GetDate())
> However, this view has been around for ages and never before caused any
> sort of problems. The issue actually started the day after I had to
> recreate the table. (I had to recreate the table because some columns
> where added and others where renamed.)

Statistics change, and the old plan was not good any more. Yes, the above
is a problematic condition. Don't you read this newsgroup? :-) I
answered a very similar question last night.

You know something about the data that the optimizer does not. It
sees:

WHERE TradeDate > <UnknownValue>
It estimates that it will hit 30% of the rows, a standard assumption.
And for 30% hit-rate a non-clustered index will be more expensive
than scanning the table.

This may be the place for an index hint. See also the thread
"ranged datetime predicates & cardinality estimates" from yesterday.
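Two ways to act on that advice, sketched with assumed names (Trades for the table, IX_TradeDate for the non-clustered index): force the index with a table hint, or evaluate the function once into a variable so it is no longer called per row. Note that even with the variable, SQL Server 2000 cannot see its value at plan-compile time, so the 30% estimate (and hence the hint) may still apply:

```sql
-- Option 1: force the non-clustered index with a table hint
SELECT *
FROM Trades WITH (INDEX (IX_TradeDate))
WHERE TradeDate >= dbo.fGetTradeDateFromThreeDaysAgo(GETDATE());

-- Option 2: compute the boundary once, then filter on the variable
DECLARE @cutoff datetime
SET @cutoff = dbo.fGetTradeDateFromThreeDaysAgo(GETDATE())
SELECT *
FROM Trades
WHERE TradeDate >= @cutoff
```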

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Thanks Erland for responding. As usual your posts are very helpful.

So, a Clustered Index Scan is as bad as a Table Scan? In this case,
what is it actually checking? The LoadDate is not even used in the
query so I am not sure what it is scanning for. Is it just using the
scan to look up the rest of the row?

As I already knew that the particular WHERE clause was bad, was I just
lucky all this time that the response time was good? Maybe because
there had not been any changes to the view or table in ages? Or maybe
because the plan was determined when there was a lot less data in the
table? Do query plans survive server reboots or restarting SQL?

Will take a look at the index hint and see how it goes.

(JayCallas@.hotmail.com) writes:
> So, a Clustered Index Scan is as bad as a Table Scan? In this case,
> what is it actually checking? The LoadDate is not even used in the
> query so I am not sure what it is scanning for. Is it just using the
> scan to look up the rest of the row?

Let's say that you need to look up Michael Richardson in the telephone
book. Of course you open the book on R and quickly find him. You are seeking
the clustered index.

But say now that you are looking for someone whose first name is Jake,
and that he lives on Smallstreet, and you really need to find him. What
do you do? You read the phone book from start to end, that is, you scan
the clustered index. The LastName, which is the key of the index, is not
part of the search, but that is irrelevant.

> As I already knew that the particular WHERE clause was bad, was I just
> lucky all this time that the response time was good?

Bad is a little too strong a word. Problematic is more accurate.

The problem with a non-clustered index is that if you get many hits,
and for every hit you need to access the data page, you will do more
reads than you would if you just scanned the table from left to right.

>Do query plans survive server reboots or restarting SQL?

No. The plan is in cache only, and could also disappear during run-time,
if the plan is aged out.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

The hint worked. Thanks.

Odd Primary Key Error

Dear All,
I have been getting primary key violation errors; however,
I am a bit confused about why. Here is the situation:
We have a table with the following structure with the ID
field as the primary key.
CREATE TABLE [dbo].[tblTest] (
[ID] [int] IDENTITY (8, 20) NOT NULL , ...
and
ALTER TABLE [dbo].[tblTest] WITH NOCHECK ADD
CONSTRAINT [PK_tblTest] PRIMARY KEY CLUSTERED
(
[ID]
) WITH FILLFACTOR = 90 ON [PRIMARY]
We then use a stored procedure to insert items into the
table. The stored procedure does not access the ID field,
but it does return it after insert.
Using the IDENT_SEED, IDENT_INCR and IDENT_CURRENT
commands I have found the next key to be generated is
correct.
However, we are starting to get intermittent primary key
violation errors.
Any pointers please.
J

Julie
Did you try adding RETURN @.@.identity at the bottom of the sp and then
picking it up on the client side?
"Julie" <anonymous@.discussions.microsoft.com> wrote in message
news:0a3601c3db48$8cabbf60$a301280a@.phx.gbl...
> Dear All,
> I have been getting Primary key violation errors, however
> I am a bit confused why. Here is the reason
> We have a table with the following structure with the ID
> field as the primary key.
> CREATE TABLE [dbo].[tblTest] (
> [ID] [int] IDENTITY (8, 20) NOT NULL , ...
> and
> ALTER TABLE [dbo].[tblTest] WITH NOCHECK ADD
> CONSTRAINT [PK_tblTest] PRIMARY KEY CLUSTERED
> (
> [ID]
> ) WITH FILLFACTOR = 90 ON [PRIMARY]
> We then use a store procedure to insert items into the
> table. The store procedure does not access the ID field,
> but it does return it after insert.
> Using the IDENT_SEED, IDENT_INCR and IDENT_CURRENT
> commands I have found the next key to be generated is
> correct.
> However we are starting to get intermediate primary key
> violation errors.
> Any pointers please.
> J

Almost at the end of every insert stored procedure we have
a 'RETURN scope_identity()' command.
This is passed back to the VB app through ADO.
J
>--Original Message--
>Julie
> Did you try in the bottom of the sp adding return
>@.@.identity and the to
>carry out on the client side?
>
>"Julie" <anonymous@.discussions.microsoft.com> wrote in message
>news:0a3601c3db48$8cabbf60$a301280a@.phx.gbl...
>

Julie
Look, the IDENTITY property does not guarantee a sequence of insertions. There may
be gaps.
Can you provide sample data + code of your sp to reproduce your problem?
"Julie" <anonymous@.discussions.microsoft.com> wrote in message
news:0adf01c3db51$1b72ad00$a101280a@.phx.gbl...
> Almost at the end of every insert store procedure we have
> a 'RETURN scope_identity()' command.
> This is filtered back to the VB app though ado.
> J
>

Hi Uri,
This has been sorted.
One of our developers did something naughty. He has been
chastised ;)
J
>--Original Message--
>Julie
>Look , IDENTITY property does not guarantee sequence of
>insertion. There may
>be gaps.
>Can you provide sample data + code of your sp to
>reproduce your problem?
>
>"Julie" <anonymous@.discussions.microsoft.com> wrote in message
>news:0adf01c3db51$1b72ad00$a101280a@.phx.gbl...
>
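The thread's working pattern - return SCOPE_IDENTITY() from the insert procedure rather than relying on @@IDENTITY - can be sketched as below. Only tblTest and its IDENTITY definition come from the post; the procedure name and the Name column are illustrative:

```sql
CREATE PROCEDURE dbo.usp_InsertTest
    @Name varchar(50)
AS
BEGIN
    -- The IDENTITY column is never mentioned; SQL Server assigns it
    INSERT INTO dbo.tblTest ([Name]) VALUES (@Name)

    -- SCOPE_IDENTITY() ignores identity values generated by triggers,
    -- unlike @@IDENTITY, so the caller gets exactly the row just inserted
    RETURN SCOPE_IDENTITY()
END
```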


Odd pivot table type query

I've been unable to find a way to write the following
query.
Assuming this table:
Year  Quarter
----  -------
1990  1
1990  2
1990  3
1990  4
1991  1
1991  2
1991  3
1991  4
Does anyone know how to write an SQL query to generate the
following results?
Year  Quarter
----  -------
1990  1,2,3,4
1991  1,2,3,4
For a documented method, you will have to make use of procedural code to
achieve this; see the following example. Or you can use a 3rd-party
tool to do it (www.rac4sql.com).
Eg:
-- sample table
-- if object_id('tab') is not null
-- drop table tab
create table tab(ID int,
SEQ_NUM int ,
ROUTE varchar(50))
go
-- sample data
insert into tab values(1 ,1 ,'AA')
insert into tab values(1 ,2 ,'BB')
insert into tab values(1 ,3 ,'CC')
insert into tab values(2 ,1 ,'AA')
insert into tab values(3 ,1 ,'VV')
insert into tab values(3 ,2 ,'XX')
go
-- t-sql code for generating report
if object_id('tempdb..#tmp') is not null
drop table #tmp
create table #tmp(id int, tmpval varchar(50))
go
declare @id int
declare @seq_num int
declare @route varchar(50), @f_route varchar(50)
select @id=0, @seq_num=0, @route='', @f_route=''
while @id is not null
begin
select @id=min(id) from tab where id > @id
while @seq_num is not null
begin
select @seq_num=min(seq_num) from tab where id = @id and seq_num > @seq_num
-- look up the route belonging to that seq_num (min(route) would pick the
-- alphabetically smallest route, not the one on the matching row)
select @route=route from tab where id = @id and seq_num = @seq_num
if @seq_num is null and @id is not null
insert into #tmp values(@id, @f_route)
select @f_route = @f_route + case @f_route when '' then '' else ',' end + @route
end
select @seq_num=0, @f_route=''
end
select * from #tmp
truncate table #tmp
Vishal Parkar
vgparkar@.yahoo.co.in | vgparkar@.hotmail.com
|||Thanks for the response, Vishal. Unfortunately this
solution isn't going to work for me: since I'm dealing with a
very large number of records, the performance cost of populating
a new table is going to be too great.
I'm adding a "download" feature for the data, so this would
result in a large number of records being returned and
written to file.
What would be great is if I could use something like a
GROUP BY and SUM where the SUM would append strings
together.
SELECT year, APPEND(Quarter)
FROM mytable
GROUP BY year

|||Shawn,
There is no built-in function that will do this for you.
You may try the following approach using a user-defined function. But remember,
this is not a documented method, so it cannot be relied on under all
circumstances.
CREATE FUNCTION EmpPhones (@ID INT)
RETURNS VARCHAR(8000)
AS
BEGIN
DECLARE @str VARCHAR(8000)
SELECT @str = ISNULL(@str + ',', '') + cats
FROM tab WHERE id = @ID
RETURN (@str)
END
-- sample data / result set.
if object_id ('tab') is not null
drop table tab
go
create table tab(ID int,
cats varchar(50))
go
insert into tab values(1 ,'1-001')
insert into tab values(1 ,'2-002')
insert into tab values(1 ,'3-003')
insert into tab values(2 ,'1-011')
insert into tab values(3 ,'1-012')
insert into tab values(3 ,'2-022')
go
--And then you would call this UDF from within a SELECT statement, as
follows:
select distinct id, dbo.EmpPhones(id) 'comma separated value' from tab
Vishal Parkar
vgparkar@.yahoo.co.in | vgparkar@.hotmail.com
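
For what it's worth, on SQL Server 2005 and later the same comma-separated aggregation can be done set-based with FOR XML PATH, with no loop and no UDF. A sketch against the same sample table and columns as above (tab, id, cats):

```sql
-- Set-based comma-separated list per id (SQL Server 2005+)
SELECT t.id,
       STUFF((SELECT ',' + t2.cats
              FROM tab t2
              WHERE t2.id = t.id
              FOR XML PATH('')), 1, 1, '') AS [comma separated value]
FROM tab t
GROUP BY t.id
```

STUFF strips the leading comma that the correlated subquery produces.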

Tuesday, March 20, 2012

Odd indexing errors

I have been getting odd errors concerning indexes. When I'm trying to insert into a table it will post an error:
index number _____ belongs to table x not table y
This error causes a failure in processing the code. If I go to the table in Enterprise Manager, open it and then close it, it seems to reassociate the indexes with the table and then there are no problems. Does anyone know why the indexes are getting confused?
Jim|||Did you DBCC CHECKTABLE?|||Or dbcc dbreindex.|||Originally posted by rnealejr
Or dbcc dbreindex.

Any idea why the indexes are getting corrupted?
Jim|||Can you post your insert statements and some DDL?|||is this the message you're getting?

Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'.|||Originally posted by ms_sql_dba
is this the message you're getting?

Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'.

yes that is exactly the mesg
Jim|||I don't know if you read this post from microsoft:

link (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/trblsql/tr_reslsyserr_1_23lh.asp)

Also, I would run some hardware diagnostics on your hard disks.|||Originally posted by rnealejr
I don't know if you read this post from microsoft:

link (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/trblsql/tr_reslsyserr_1_23lh.asp)

Also, I would run some hardware diagnostics on your hard disks.

I have 1.3 terabytes of disk space... what diagnostics would you recommend?
Jim|||Originally posted by rnealejr
I don't know if you read this post from microsoft:

link (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/trblsql/tr_reslsyserr_1_23lh.asp)

Also, I would run some hardware diagnostics on your hard disks.

DBCC CHECKDB ('maersk data warehouse') returned no errors... maybe I have write caching on... that's the only other thing that I can think of.
Jim
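
The DBCC commands suggested in this thread would be run roughly as follows; the table and database names here are placeholders, not ones from the thread:

```sql
-- Check the consistency of one table's pages and indexes
DBCC CHECKTABLE ('dbo.MyTable')

-- Rebuild all indexes on the table (90 is an illustrative fillfactor)
DBCC DBREINDEX ('dbo.MyTable', '', 90)

-- Or check the whole database and report every error message
DBCC CHECKDB ('MyDatabase') WITH ALL_ERRORMSGS
```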

Odd deadlocking behaviour

I am running SQL Server 2000 SP3a on a single processor computer and:
I have a table with the following columns
CREATE TABLE [dbo].[SP148_JOB_AHO_PERIODS] (
[JOB_NO] [int] NOT NULL ,
[OVERRIDDEN_PERIOD_START] [datetime] NULL ,
[OVERRIDDEN_PERIOD_END] [datetime] NULL
) ON [PRIMARY]
GO
That has the following data:
1,06/01/2003 17:00:00,NULL
2,NULL,13/01/2003 08:00:00
2,13/01/2003 17:00:00,NULL
3,NULL,20/01/2003 08:00:00
3,20/01/2003 17:00:00,NULL
4,NULL, 27/01/2003 08:00:00
4,27/01/2003 17:00:00,NULL
If I now try deleting these rows from two database sessions in the order as
seen below I end up with a deadlock:
Firstly on SPID 56:
set implicit_transactions on
delete from SP148_JOB_AHO_PERIODS where job_no = 2
Secondly on SPID 57:
set implicit_transactions on
delete from SP148_JOB_AHO_PERIODS where job_no = 1
... This then blocks, which I am not sure about.
Thirdly back on SPID 56:
delete from SP148_JOB_AHO_PERIODS where job_no = 6
Creates a deadlock victimising SPID 57.
I accept that this table does not have a primary key but when I add one it
still deadlocks.
Why, on step 2, does SPID 57 lock when I delete the row where job_no = 1?
Why does a deadlock occur when SPID 56 subsequently deletes the row where
job_no is 6 i.e. no rows?
When I looked at sp_lock output it appeared that both sessions were waiting
on the same RID, does this mean that a RID is not necessarily 1 database
table row?
Thanks|||- What kind of lock triggered the deadlock? row, index, page, extent, table,
database
- Do you have an index by "job_no"?
AMB
> Thanks|||Turn on trace flag 1204 and -1. Then your answers will be in the errorlog.
"Alejandro Mesa" <AlejandroMesa@discussions.microsoft.com> wrote in message
news:5E921D65-22E3-40D0-AE4C-BEBF76DF8FC7@microsoft.com...
>- What kind of lock triggered the deadlock? row, index, page, extent,
>table,
> database
> - Do you have an index by "job_no"?
> AMB
> "Tony Jones" wrote:
>|||The blocked SPID 57 is waiting on a RID (so a row lock); both SPIDs are
waiting on the same RID, i.e. the same fileid:pageid:slot, hence the question:
is the RID locking more than one row?
No, I have not had an index on job_no. I have tried a primary key on a new
column defined as the identity column. This produced the same deadlock
behaviour.
"Alejandro Mesa" wrote:
> - What kind of lock triggered the deadlock? row, index, page, extent, table,
> database
> - Do you have an index by "job_no"?
> AMB
> "Tony Jones" wrote:
>
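
The trace-flag suggestion above translates to the following; the index is a sketch of one likely fix, since without an index on JOB_NO each DELETE must scan the heap and can collide with rows the other session has locked:

```sql
-- Write deadlock chains to the errorlog; -1 applies the flag to all sessions
DBCC TRACEON (1204, -1)

-- Illustrative index so each DELETE can seek on job_no instead of scanning
CREATE INDEX IX_SP148_JOB_NO ON dbo.SP148_JOB_AHO_PERIODS (JOB_NO)
```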


Odd date data found in table

Hi all, this is REALLY weird; I can't seem to make heads or tails of it. From my understanding, each datatype has a set of constraints assigned to it (for example, an int datatype can only accept whole numeric values), and a datetime datatype can only accept a valid date.

Oddly, though, I have found the following dates in a table. In addition, when any kind of data operation is performed on the table, SQL returns the following error:

Server: Msg 8630, Level 16, State 1, Line 1

Internal Query Processor Error: The query processor encountered an unexpected error during execution.

Here are some of the dates that were present in the table:

DateColumn1

--

1900-01-01 857:44:45.813

1900-01-01 872:51:16.427

1900-01-01 872:54:57.440

1900-01-01 873:09:32.107

1900-01-01 873:13:10.560

1900-01-01 873:16:49.867

1900-01-01 888:27:00.640

DateColumn2

-22063-05-18 00:00:00.000

-20285-02-03 00:00:00.000

In addition there were some other columns that had odd data in them (VERY WEIRD):

Char25Column

?Q307000
?Q307000
?Q307000

?Q307000

?Q307000

Any thoughts?

Thanks

Have you run a DBCC CHECKDB lately? You might have some issue with your hardware, storage, or perhaps some 3rd party filter that's somehow modifying your data.

What version of SQL Server is this? If it is SQL Server 2005 hopefully you have database page checksums enabled so that if something does scribble on database pages, then the changes would be detected the next time you read those pages.

Do you have any XPs (extended stored procedures)? A poorly written/tested XP might scribble across SQL Server's memory.

Check your overall system and consider a call to product support. Also, you probably want to check your recent backups to see if the errors occur there as well.


Don

|||

Hi Don, thanks for the quick reply. To answer your question, I have run DBCC table checks on the table in question and, as expected, consistency errors were found (I would have posted them before, but I tried to make the post as readable as possible; I'll post them below). The odd thing here is the fact that SQL allowed the corrupted data to be inserted in the first place. I mean, you would think that SQL Server would throw up an error or two.

As far as the extended SPs, yes, but they wouldn't have been called during this data input (btw, it was pushed in via Informatica). Then again, regardless of the client that's pushing and pulling the data, SQL Server should have had the last call as to what data got committed.

Here are the DBCC table check results (I have omitted a lot of the exact errors that were repeated per row):

DBCC results for 'tLoadedTable'.

Msg 2570, Level 16, State 3, Line 1

Page (85:10), slot 0 in object ID 1762977507, index ID 0, partition ID 72057594677035008, alloc unit ID 72057594649313280 (type "In-row data"). Column "PaidAmt" value is out of range for data type "decimal". Update column to a legal value.

Msg 2570, Level 16, State 3, Line 1

Page (85:10), slot 0 in object ID 1762977507, index ID 0, partition ID 72057594677035008, alloc unit ID 72057594649313280 (type "In-row data"). Column "PaidDt" value is out of range for data type "datetime". Update column to a legal value.

Msg 2570, Level 16, State 3, Line 1

Page (85:10), slot 0 in object ID 1762977507, index ID 0, partition ID 72057594677035008, alloc unit ID 72057594649313280 (type "In-row data"). Column "InvoiceDt" value is out of range for data type "datetime". Update column to a legal value.

Msg 2570, Level 16, State 3, Line 1

Page (85:10), slot 0 in object ID 1762977507, index ID 0, partition ID 72057594677035008, alloc unit ID 72057594649313280 (type "In-row data"). Column "LoadDate" value is out of range for data type "datetime". Update column to a legal value.

Msg 2570, Level 16, State 3, Line 1

Table error: object ID 1762977507, index ID 0, partition ID 72057594677035008. A row should be on partition number 1 but was found in partition number 10. Possible extra or invalid keys for:

Msg 8988, Level 16, State 1, Line 1

Row (85:10:0) identified by (HEAP RID = (85:10:0)).

Too many errors found (201) for object ID 1762977507. To see all error messages rerun the statement using "WITH ALL_ERRORMSGS".

There are 133508601 rows in 9102393 pages for object "tLoadedTable".

CHECKTABLE found 0 allocation errors and 336633 consistency errors in table 'tLoadedTable' (object ID 1762977507).

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Thanks
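
The last DBCC message suggests rerunning with all error messages; for the table named in the output above that would be:

```sql
-- Report every consistency error, suppressing informational messages
DBCC CHECKTABLE ('tLoadedTable') WITH ALL_ERRORMSGS, NO_INFOMSGS
```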

|||

I'm running SQL Server 2005 Enterprise SP2; in addition, this has also happened on another database on a different server, also running SQL 2005 SP2, on a different SAN (same client though).

Thanks

|||

Don, sorry I missed your question about the page verify mode. Currently I'm running all databases in TORN_PAGE_DETECTION mode; we're mostly an OLAP shop and don't have a lot of rogue clients making data modifications. Though based on this little bout I may be changing it to page checksums (though there seems to be considerable overhead due to the constant checksum verifications, right?).

Thanks
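
The switch from torn-page detection to page checksums discussed here is a one-line change per database (the database name is a placeholder):

```sql
-- SQL Server 2005: checksum each page on write, verify on every read
ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM
-- Existing pages only gain a checksum the next time they are written
```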

|||

Since DBCC has detected over 330,000 anomalous data values it's quite likely that you have hardware/storage issues. I suggest you get that looked at before you make any further changes to the data.

It may be that you have a wayward XP that's scribbling outside its intended address space. An XP can write anywhere in SQL Server's writable memory, so even if an XP wasn't supposed to be writing on these pages, it might be doing so accidentally. Of course, a well-written and well-tested XP shouldn't do this, but it could have a bug or two that's pointing it in the wrong direction. There's nothing that SQL Server can do to keep your XP from scribbling where it shouldn't, other than replacing XPs with CLR modules as we've enabled with SS2005.

At least page checksums can detect that what was read is not the same as what was written. In your case it would detect these issues if they were caused by the IO system. Torn-page detection might detect an IO problem here, but only if the IO system happened to flip some bits near the torn-page bits on a page (there are only 32 of these bits on an 8K page, so the IO problem could well miss those).

And yes, there is some overhead for the page checksums, but you also would have detected this issue much sooner (if indeed it is an IO issue). You'd have to do some tests to see what the overhead would be for your system; it's hard to predict as it depends on cache hit probabilities and other factors .. but often people find it to be less than they expect .. or they're willing to "pay the price" for it since it will detect issues with the IO/storage system.

So: I suggest you investigate your IO system for issues (check out sqliosim and use it to test your system), and that you double- triple check any XPs to make sure they aren't generating those strings of bytes that you're finding in your dates and decimals (check for any strings/varchars as well .. perhaps it's writing a string of readable characters that you'll be able to use to track down the source of the issues).

Don

|||

Hi Don,

Again thanks for the reply.

As far as IO errors, there were none (funny you'd mentioned the sqliosim utility; I had run a check earlier in the week with no errors found). Another thing is that this issue is occurring on two separate servers that also reside on separate SANs; oddly, the issue occurred on the exact same partition boundary. As it sits, I have a trace running on the entire process and found no evidence that any XP or CLR routines were being called (just the simple sp_prepare and sp_execute calls for table collation definition information, later followed by an "insert bulk" API process that shows no errors during the loading).

Well, I have been able to reproduce the issue, yet it's very weird.

OK, so if I hadn't mentioned it before, the table in question is partitioned. Its partition scheme is also shared by other tables (yet they don't have the same issue). The data load is being pushed in via an Informatica "insert bulk" API call. Regardless of which partition key is used, it still allows the records to be entered, yet when a DBCC CHECKTABLE or simple SELECT is performed on the table it fails and returns the above error.

--

I attempted loading to other partition boundaries, and some succeeded with no corruption, others succeeded with corruption. The thing that really bothers me is that SQL is allowing these errors to be committed. I have checked the input data and the errors are not there, so SQL Server has to be corrupting them when the insert bulk block is read into memory (yet again, you would think SQL Server would be checking it here).

--test 1

I created another table with the same structure within the same database, but did not add it to a partition scheme; I just left it on the primary filegroup (rather, the PrimaryData filegroup; I keep the system objects and user objects separate). I ran the same ETL loading process into this new table with no errors, and the data was clean (I was able to select against it with no issues and the data looked good, as expected).

--test 2

I created another database and built the same table with the partition scheme and function within it (so basically the environment is exactly the same as production, minus all the other tables and the extra data within the partitioned table).

I ran the same ETL load process against the new database... and it loaded fine, no error no corruption.. ughhh!!!

-- additional

I have run DBCC CHECKDB numerous times, yet no errors are even returned. I have placed a call with MS, but they seem to think it's the third party tool that's causing the issue (though I'm not convinced, since I'm able to load into a non-partitioned table, and a partitioned table in another database on the same server and same disk, with no issues).

YIKES!!!!

Thanks

|||

Yes, it does seem very odd. It would seem that even though a 3rd party API was being used, it shouldn't be able to insert bad values into a datatype. That's why I suspect either an IO issue or an XP issue, since the bytes seem to be changed outside of SQL Server's actual control.

I suggest you pursue this some more with MS support together with the 3rd party .. perhaps the 3rd party API is using some XP of its own (some XPs are named sp_xxx, so that can seem like no XPs are being called if you're expecting xp_xxx).

Don

|||

Hi Don, Thanks again for your reply,

I have had my SAN admin review the SAN log to look for any kind of latency or stale IO issues (so far he hasn't been able to find anything, and I myself haven't found anything at the OS or SQL level that would suggest IO or memory issues...). Very odd. I have, though, been able to find out that the insert bulk API the third party vendor had been using was indeed an ODBC call to bcp.exe. Funny though, I have since seen the data being submitted (I've captured the network packets to review the data that was being sent to the bcp process, and all is well, so the corrupted data is not coming from the third party tool). Potentially it could be the memory block that the ODBC driver consumed when the connection was made to the API; perhaps SQL is indeed seeing the correct data values, verifying that they are compliant, and during the commit process of writing the data to physical disk the memory block is being corrupted by some low-level IO driver filter.

In addition after reviewing the trace( I know not all events are trapped by SQL server) again the only operations that are called were the sp_prepare and sp_execute which were unprepared before the api called was made. also in reviewing the third party tools model it show no other sql server functions are called ( just internal application calls)

One other thing was weird, the fact that I was able to load into a non-partitioned table with the same load process (third party tool) as well as another database partitioned table (same scheme, same function, same disk, same server) without data corruption.

Who knows? I’m still perusing the MS avenue and will post my findings( so far it my been a MS bug but there escalation team isn’t in until Monday.)

Meanwhile I have been attempting to read up on driver filters (interesting enough they can modify or FILTER data prior to persisting the data ( i guess it happens in the transfer process from memory to bus to disk.)

Though if anyone has any thought please do tell...

Thanks

|||

Could you send me an email so we can look into this a bit more? DonV@.microsoft.com.(nospam)

Don

Odd date data found in table

Hi all, this is REALLY weird; I can't seem to make heads or tails of it. From my understanding, each datatype has a set of constraints assigned to it (for example, an int column can only accept whole numbers, and a datetime column can only accept a valid date).

Oddly, though, I have found the following dates in a table. In addition, when any kind of data operation is performed on the table, SQL Server returns the following error:

Server: Msg 8630, Level 16, State 1, Line 1

Internal Query Processor Error: The query processor encountered an unexpected error during execution.

Here are some of the dates that were present in the table:

DateColumn1

--

1900-01-01 857:44:45.813

1900-01-01 872:51:16.427

1900-01-01 872:54:57.440

1900-01-01 873:09:32.107

1900-01-01 873:13:10.560

1900-01-01 873:16:49.867

1900-01-01 888:27:00.640

DateColumn2

-22063-05-18 00:00:00.000

-20285-02-03 00:00:00.000

In addition, some other columns contained odd data as well (VERY WEIRD):

Char25Column

?Q307000

?Q307000

?Q307000

?Q307000

?Q307000

Any thoughts?

Thanks
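For reference, SQL Server's normal T-SQL input path rejects values like these outright; a minimal sketch (the table and value are illustrative):

```sql
-- A datetime column rejects an out-of-range value on a normal insert:
DECLARE @t TABLE (d datetime);
INSERT INTO @t (d) VALUES ('1900-01-01 857:44:45.813');
-- Fails with a conversion error; the value never reaches the table.
```

That such values are nonetheless sitting on disk suggests they were written outside the normal conversion/validation path.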

Have you run a DBCC CHECKDB lately? You might have some issue with your hardware, storage, or perhaps some 3rd party filter that's somehow modifying your data.

What version of SQL Server is this? If it is SQL Server 2005, hopefully you have database page checksums enabled, so that if something does scribble on database pages, the changes will be detected the next time those pages are read.

Do you have any XPs (extended stored procedures)? A poorly written/tested XP might scribble across SQL Server's memory.

Check your overall system and consider a call to product support. Also, you probably want to check your recent backups to see if the errors occur there as well.


Don
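To see which page-verify mode a database is currently using, the SQL Server 2005 catalog views can be queried directly (sketch; substitute your database name):

```sql
-- page_verify_option_desc is NONE, TORN_PAGE_DETECTION, or CHECKSUM
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE name = N'YourDatabase';
```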

|||

Hi Don, thanks for the quick reply. To answer your question: I have run DBCC table checks on the table in question, and as expected, consistency errors were found. (I would have posted them before, but I tried to keep the post as readable as possible; I'll post them below.) The odd thing here is that SQL Server allowed the corrupted data to be inserted in the first place. You would think it would throw an error or two.

As for extended stored procedures: yes, we have some, but they wouldn't have been called during this data load (the data was pushed in via Informatica). Then again, regardless of the client pushing and pulling the data, SQL Server should have the last word on what data gets committed.

Here are the DBCC table check results (I have omitted many of the errors that were repeated per row):

DBCC results for 'tLoadedTable'.

Msg 2570, Level 16, State 3, Line 1

Page (85:10), slot 0 in object ID 1762977507, index ID 0, partition ID 72057594677035008, alloc unit ID 72057594649313280 (type "In-row data"). Column "PaidAmt" value is out of range for data type "decimal". Update column to a legal value.

Msg 2570, Level 16, State 3, Line 1

Page (85:10), slot 0 in object ID 1762977507, index ID 0, partition ID 72057594677035008, alloc unit ID 72057594649313280 (type "In-row data"). Column "PaidDt" value is out of range for data type "datetime". Update column to a legal value.

Msg 2570, Level 16, State 3, Line 1

Page (85:10), slot 0 in object ID 1762977507, index ID 0, partition ID 72057594677035008, alloc unit ID 72057594649313280 (type "In-row data"). Column "InvoiceDt" value is out of range for data type "datetime". Update column to a legal value.

Msg 2570, Level 16, State 3, Line 1

Page (85:10), slot 0 in object ID 1762977507, index ID 0, partition ID 72057594677035008, alloc unit ID 72057594649313280 (type "In-row data"). Column "LoadDate" value is out of range for data type "datetime". Update column to a legal value.

Msg 2570, Level 16, State 3, Line 1

Table error: object ID 1762977507, index ID 0, partition ID 72057594677035008. A row should be on partition number 1 but was found in partition number 10. Possible extra or invalid keys for:

Msg 8988, Level 16, State 1, Line 1

Row (85:10:0) identified by (HEAP RID = (85:10:0)).

Too many errors found (201) for object ID 1762977507. To see all error messages rerun the statement using "WITH ALL_ERRORMSGS".

There are 133508601 rows in 9102393 pages for object "tLoadedTable".

CHECKTABLE found 0 allocation errors and 336633 consistency errors in table 'tLoadedTable' (object ID 1762977507).

DBCC execution completed. If DBCC printed error messages, contact your system administrator.
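The truncated run above can be rerun to capture every error, as the "Too many errors found" message suggests (the table name is as in the output above):

```sql
-- Rerun with all error messages, suppressing informational output:
DBCC CHECKTABLE ('tLoadedTable') WITH ALL_ERRORMSGS, NO_INFOMSGS;
```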

Thanks

|||

I'm running SQL Server 2005 Enterprise SP2. In addition, this has also happened on a database on a different server, also running SQL 2005 SP2, on a different SAN (same client, though).

Thanks

|||

Don, sorry I missed your question about the page verify mode. Currently I'm running all databases in TORN_PAGE_DETECTION mode; we're mostly an OLAP shop and don't have a lot of rogue clients making data modifications. Based on this little bout, though, I may be changing it to page checksums (though isn't there considerable overhead from the constant checksum verifications?).

Thanks
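For reference, switching the page-verify mode is a one-line change (sketch; the database name is a placeholder):

```sql
ALTER DATABASE YourDatabase SET PAGE_VERIFY CHECKSUM;
-- Note: existing pages only get a checksum the next time they are
-- modified and written back; the protection is not retroactive.
```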

|||

Since DBCC has detected over 360,000 anomalous data values it's quite likely that you have hardware/storage issues. I suggest you get that looked at before you make any further changes to the data.

It may be that you have a wayward XP that's scribbling outside its intended address space. An XP can write anywhere in SQL Server's writable memory, so even if an XP wasn't supposed to be writing on these pages, it might be doing so accidentally. Of course, a well-written and well-tested XP shouldn't do this, but it could have a bug or two that's pointing it in the wrong direction. There's nothing SQL Server can do to keep your XP from scribbling where it shouldn't, other than replacing XPs with CLR modules, as we've enabled with SS2005.

At least page checksums can detect that what was read is not the same as what was written. In your case it would detect these issues if they were caused by the IO system. Torn-page detection might detect an IO problem here, but only if the IO system happened to flip some bits near the torn-page bits on a page (there are only 32 of these bits on an 8K page, so the IO problem could well miss those).

And yes, there is some overhead for the page checksums, but you also would have detected this issue much sooner (if indeed it is an IO issue). You'd have to do some tests to see what the overhead would be for your system; it's hard to predict as it depends on cache hit probabilities and other factors .. but often people find it to be less than they expect .. or they're willing to "pay the price" for it since it will detect issues with the IO/storage system.

So: I suggest you investigate your IO system for issues (check out sqliosim and use it to test your system), and that you double- and triple-check any XPs to make sure they aren't generating those strings of bytes that you're finding in your dates and decimals (check any strings/varchars as well; perhaps it's writing a string of readable characters that you'll be able to use to track down the source of the issue).

Don

|||

Hi Don,

Again thanks for the reply.

As far as IO errors go, there were none (funny you mentioned the sqliosim utility; I had run a check earlier in the week with no errors found). Another thing: this issue is occurring on two separate servers that reside on separate SANs, and oddly, the issue occurred on the exact same partition boundary. As it sits, I have a trace running on the entire process and have found no evidence that any XP or CLR routines are being called (just the simple sp_prepare and sp_execute calls for table collation definition information, followed by an API "insert bulk" process that shows no errors during the load).

Well, I have been able to reproduce the issue, yet it's very weird.

OK, in case I hadn't mentioned it before: the table in question is partitioned, and its partition scheme is also shared by other tables (yet they don't have the same issue). The data load is being pushed in via an Informatica API "insert bulk" call. Regardless of which partition key is used, it still allows the records to be entered, yet when a DBCC CHECKTABLE or a simple SELECT is performed on the table, it fails and returns the above error.

--

I attempted loading into other partition boundaries; some succeeded with no corruption, others succeeded with corruption. The thing that really bothers me is that SQL Server is allowing these errors to be committed. I have checked the input data and the errors are not there, so SQL Server has to be corrupting the values when the insert bulk block is read into memory (yet again, you would think SQL Server would be checking it here).
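One way to see where rows actually landed versus where the partition function says they belong (the function and column names here are placeholders for the real ones):

```sql
-- Actual per-partition row placement, without scanning the table:
SELECT partition_number, row_count
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID(N'dbo.tLoadedTable')
  AND index_id IN (0, 1);

-- Expected partition for each key value, per the partition function
-- (pfLoadDate and LoadDate are illustrative names):
SELECT $PARTITION.pfLoadDate(LoadDate) AS expected_partition,
       COUNT(*) AS row_count
FROM dbo.tLoadedTable
GROUP BY $PARTITION.pfLoadDate(LoadDate);
```

A mismatch between the two result sets would line up with the "row should be on partition number 1 but was found in partition number 10" error from DBCC.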

--test 1

I created another table with the same structure within the same database but did not add it to a partition scheme (I just left it on the primary filegroup, or rather the PrimaryData filegroup; I keep system objects and user objects separate). I ran the same ETL loading process into this new table with no errors, and the data was clean (I was able to select against it with no issues, and the data looked good, as expected).

--test 2

I created another database and built the same table with the partition scheme and function within it (so basically the environment is exactly the same as production, minus all the other tables and the extra data within the partitioned table).

I ran the same ETL load process against the new database... and it loaded fine, no errors, no corruption... ughhh!!!

-- additional

I have run DBCC CHECKDB numerous times, yet no errors are returned. I have placed a call with MS, but they seem to think it's the third-party tool that's causing the issue (though I'm not convinced, since I'm able to load into a non-partitioned table, and into a partitioned table in another database on the same server and same disk, with no issues).

YIKES!!!!

Thanks

|||

Yes, it does seem very odd. Even though a 3rd party API was being used, it shouldn't be able to insert bad values into a datatype. That's why I suspect either an IO issue or an XP issue, since the bytes seem to be changed outside of SQL Server's actual control.

I suggest you pursue this some more with MS support together with the 3rd party. Perhaps the 3rd party API is using some XP of its own (some XPs are named sp_xxx, which can make it seem like no XPs are being called if you're expecting xp_xxx).

Don

|||

Hi Don, Thanks again for your reply,

I have had my SAN admin review the SAN logs to look for any kind of latency or stale IO issues (so far he hasn't been able to find anything, and I myself haven't found anything at the OS or SQL level pointing to the IO or memory issues you suggested... very odd). I have, though, been able to find out that the insert bulk API the third-party vendor uses is indeed an ODBC call to bcp.exe. I have since captured the network packets to review the data being sent to the bcp process, and all is well, so the corrupted data is not coming from the third-party tool. Potentially it could be the memory block the ODBC driver consumed when the connection was made to the API: perhaps SQL Server is indeed seeing the correct data values and verifying that they are compliant, and during the commit process of writing the data to physical disk, the memory block is being corrupted by some low-level IO filter driver.

In addition, after reviewing the trace (I know not all events are trapped by SQL Server), again the only operations called were sp_prepare and sp_execute, which were unprepared before the API call was made. Also, reviewing the third-party tool's model shows no other SQL Server functions being called (just internal application calls).

One other weird thing: I was able to load into a non-partitioned table with the same load process (third-party tool), as well as into a partitioned table in another database (same scheme, same function, same disk, same server), without data corruption.

Who knows? I'm still pursuing the MS avenue and will post my findings (so far it may be an MS bug, but their escalation team isn't in until Monday).

Meanwhile, I have been attempting to read up on filter drivers (interestingly enough, they can modify or FILTER data prior to persisting it; I guess this happens in the transfer process from memory to bus to disk).

If anyone has any thoughts, please do tell...

Thanks

|||

Could you send me an email so we can look into this a bit more? DonV@.microsoft.com.(nospam)

Don