Showing posts with label datetime.

Monday, March 19, 2012

Mysteriously changing datetime data

I know this may seem like a weird thing, but today something very strange
happened to one of my database tables. The table has three columns that are
of type datetime. They normally contain values such as 08/12/2004 16:03:35.
One of the three columns, however, now shows either 08/12/2004 only or
08/12/2004 00:00:00. The other two columns are still "normal".
No update queries have been run against this table.
Does anybody know why the dates would have changed like that?
|||DATETIME columns don't contain any inherent format and they always include
both date and time. It looks like your client application, whatever that may
be, is formatting the date differently to what you expect. Check the
regional settings on your PC or any other options in your app that may
affect date formatting.
David Portas
SQL Server MVP
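A quick way to see this in Query Analyzer: the very same stored value rendered
with a few different CONVERT styles (the literal below is only an example, not
the poster's data).

DECLARE @d datetime
SET @d = '2004-12-08T16:03:35'
SELECT CONVERT(varchar(30), @d, 101) AS us_style,        -- 12/08/2004
       CONVERT(varchar(30), @d, 103) AS british_style,   -- 08/12/2004
       CONVERT(varchar(30), @d, 120) AS odbc_canonical   -- 2004-12-08 16:03:35

The value in the column never changes; only the rendering does.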
|||Where are you looking, in Enterprise Manager? Try using SELECT cols FROM
table in Query Analyzer.
http://www.aspfaq.com/
(Reverse address to reply.)
"SackerMan" <SackerMan@.discussions.microsoft.com> wrote in message
news:6529C05A-2374-421C-93C4-6A498163F365@.microsoft.com...
> I know this may seem like a weird thing, but today something very strange
> happened to one of my database tables. The table has three columns that
are
> of type datetime. They normally contain values such as 08/12/2004
16:03:35.
> One of the three columns, however, now shows either 08/12/2004 only or
> 08/12/2004 00:00:00. The other two columns are still "normal".
> No update queries have been run against this table.
|||Hi SackerMan
The values actually stored in a datetime column don't 'look' anything like
what you have shown us. They are stored in an internal unambiguous format
that you never see.
How a datetime value 'looks' depends on the tool and the command you use to
display it.
So, how are you actually seeing these values?
HTH
--
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
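One way to see what is actually stored is to look at the raw values straight in
Query Analyzer; style 121 prints the full value, milliseconds included,
regardless of regional settings. The table and column names below are only
placeholders for the real ones.

SELECT TOP 10
       YourDateColumn,
       CONVERT(varchar(23), YourDateColumn, 121) AS stored_value  -- yyyy-mm-dd hh:mi:ss.mmm
FROM dbo.YourTable
ORDER BY YourDateColumn DESC

If stored_value really shows 00:00:00.000, the data itself was changed; if the
time is still there, only the display in the client tool has changed.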
"SackerMan" <SackerMan@.discussions.microsoft.com> wrote in message
news:6529C05A-2374-421C-93C4-6A498163F365@.microsoft.com...
> I know this may seem like a weird thing, but today something very strange
> happened to one of my database tables. The table has three columns that
are
> of type datetime. They normally contain values such as 08/12/2004
16:03:35.
> One of the three columns, however, now shows either 08/12/2004 only or
> 08/12/2004 00:00:00. The other two columns are still "normal".
> No update queries have been run against this table.
> Does anybody know why the dates would have changed like that?

Mysteriously changing datetime data

I know this may seem like a weird thing, but today something very strange
happened to one of my database tables. The table has three columns that are
of type datetime. They normally contain values such as 08/12/2004 16:03:35.
One of the three columns, however, now shows either 08/12/2004 only or
08/12/2004 00:00:00. The other two columns are still "normal".
No update queries have been run against this table.
Does anybody know why the dates would have changed like that?DATETIME columns don't contain any inherent format and they always include
both date and time. It looks like your client application, whatever that may
be, is formatting the date differently to what you expect. Check the
regional settings on your PC or any other options in your app that may
affect date formatting.
--
David Portas
SQL Server MVP
--|||Where are you looking, in Enterprise Manager? Try using SELECT cols FROM
table in Query Analyzer.
--
http://www.aspfaq.com/
(Reverse address to reply.)
"SackerMan" <SackerMan@.discussions.microsoft.com> wrote in message
news:6529C05A-2374-421C-93C4-6A498163F365@.microsoft.com...
> I know this may seem like a weird thing, but today something very strange
> happened to one of my database tables. The table has three columns that
are
> of type datetime. They normally contain values such as 08/12/2004
16:03:35.
> One of the three columns, however, now shows either 08/12/2004 only or
> 08/12/2004 00:00:00. The other two columns are still "normal".
> No update queries have been run against this table.
> Does anybody know why the dates would have changed like that?|||Hi SackerMan
The values actually stored in a datetime column don't 'look' anything like
what you have shown us. They are stored in an internal unambigous format
that you never see.
How a datetime value 'looks' depends on the tool and the command you use to
display it.
So, how are you actually seeing these values?
--
HTH
--
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
"SackerMan" <SackerMan@.discussions.microsoft.com> wrote in message
news:6529C05A-2374-421C-93C4-6A498163F365@.microsoft.com...
> I know this may seem like a weird thing, but today something very strange
> happened to one of my database tables. The table has three columns that
are
> of type datetime. They normally contain values such as 08/12/2004
16:03:35.
> One of the three columns, however, now shows either 08/12/2004 only or
> 08/12/2004 00:00:00. The other two columns are still "normal".
> No update queries have been run against this table.
> Does anybody know why the dates would have changed like that?

Mysteriously changing datetime data

I know this may seem like a weird thing, but today something very strange
happened to one of my database tables. The table has three columns that are
of type datetime. They normally contain values such as 08/12/2004 16:03:35.
One of the three columns, however, now shows either 08/12/2004 only or
08/12/2004 00:00:00. The other two columns are still "normal".
No update queries have been run against this table.
Does anybody know why the dates would have changed like that?
DATETIME columns don't contain any inherent format and they always include
both date and time. It looks like your client application, whatever that may
be, is formatting the date differently to what you expect. Check the
regional settings on your PC or any other options in your app that may
affect date formatting.
David Portas
SQL Server MVP
|||Where are you looking, in Enterprise Manager? Try using SELECT cols FROM
table in Query Analyzer.
http://www.aspfaq.com/
(Reverse address to reply.)
"SackerMan" <SackerMan@.discussions.microsoft.com> wrote in message
news:6529C05A-2374-421C-93C4-6A498163F365@.microsoft.com...
> I know this may seem like a weird thing, but today something very strange
> happened to one of my database tables. The table has three columns that
are
> of type datetime. They normally contain values such as 08/12/2004
16:03:35.
> One of the three columns, however, now shows either 08/12/2004 only or
> 08/12/2004 00:00:00. The other two columns are still "normal".
> No update queries have been run against this table.
> Does anybody know why the dates would have changed like that?
|||Hi SackerMan
The values actually stored in a datetime column don't 'look' anything like
what you have shown us. They are stored in an internal unambigous format
that you never see.
How a datetime value 'looks' depends on the tool and the command you use to
display it.
So, how are you actually seeing these values?
HTH
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
"SackerMan" <SackerMan@.discussions.microsoft.com> wrote in message
news:6529C05A-2374-421C-93C4-6A498163F365@.microsoft.com...
> I know this may seem like a weird thing, but today something very strange
> happened to one of my database tables. The table has three columns that
are
> of type datetime. They normally contain values such as 08/12/2004
16:03:35.
> One of the three columns, however, now shows either 08/12/2004 only or
> 08/12/2004 00:00:00. The other two columns are still "normal".
> No update queries have been run against this table.
> Does anybody know why the dates would have changed like that?

Monday, March 12, 2012

Mysterious Date Problem

Is there a way to set either SQL Server 2000 or ASP.NET datetime fields to a standard format? The problem is that I am passing correct datetime fields using stored procedures and keep getting "Cannot convert datetime into string". It seems to me that many other developers are having that same problem. I tried a lot of different methods and still have the same problem. I'm using C# and I never had a problem with datetime fields when I was using VB.NET. The problem is that SQL Server is returning datetime formats that are not compatible with C#. I have code that works in other projects, but when I try to use that same code I get that conversion error. How do I set the datetime in SQL Server and ASP.NET when I run queries so that the datetime output is in mm/dd/yyyy?
|||What is the statement where you get the error "Cannot convert datetime into string"?
|||Well, any time I return a dataset and bind it to a datagrid I get a conversion error. Sometimes I even got 'The conversion of a char data type to a datetime data type resulted in an out-of-range'. When I ran a simple query in Query Analyzer like "select convert(datetime, '12/20/2003', 101)", the output returns 2003-12-20 00:00:00.000.
What I need is mm/dd/yyyy. Is there a way to change the default setting in SQL Server to that particular format instead of yyyy-mm-dd? I think C# is using mm-dd-yyyy as a default. In the ASP.NET datagrid all I'm doing is just binding the dataset to the grid, and I get that error. Thanks for your reply...
|||You should do the formatting at the front end... for instance, when you bind the result set to the datagrid you can use the DataFormatString property of the datagrid columns to format it to mm/dd/yyyy, which is much simpler than forcing SQL Server into some other format.

hth
|||Yeah, I used that format and that is not the problem. The problem is that I can't even bind the dataset to the datagrid. It's the binding that is causing the error. But thanks anyway.
|||What is the code that you are using?
|||Basically I just pass parameters to a method and it returns a dataset:

datagrid.DataSource = ds;
datagrid.DataBind();

public DataSet GetOriginationReport(DateTime dtStartDate, DateTime dtEndDate)
{
    // instantiate connection and data adapter for the stored procedure
    SqlConnection myConnection = new SqlConnection(ConfigurationSettings.AppSettings["connString2"]);
    SqlDataAdapter myCommand = new SqlDataAdapter("rptSelectReport", myConnection);
    myCommand.SelectCommand.CommandType = CommandType.StoredProcedure;

    // typed datetime parameters, so no string-to-date conversion happens in SQL
    myCommand.SelectCommand.Parameters.Add("@pStartDate", SqlDbType.DateTime);
    myCommand.SelectCommand.Parameters.Add("@pEndDate", SqlDbType.DateTime);

    myCommand.SelectCommand.Parameters["@pStartDate"].Value = dtStartDate;
    myCommand.SelectCommand.Parameters["@pEndDate"].Value = dtEndDate;

    DataSet myDS = new DataSet();
    myCommand.Fill(myDS, "viewReportPipe");
    return myDS;
}

===============================
Stored Procedure
===============================
CREATE PROCEDURE rptSelectReport
(
    @pStartDate datetime,
    @pEndDate datetime
)
As

Select * from viewReportPipe Where AppDate Between @pStartDate And @pEndDate
|||What result do you get when you run this query in Query Analyzer?
Select * from viewReportPipe Where AppDate Between @pStartDate And @pEndDate
|||Well, I get a list of results. The problem is occurring in the binding. When I run the query and bind it to a datagrid it works. But I have sorting enabled in the datagrid, and when I run that same query and bind it to the grid it fails and gives me that "Cannot convert datetime to string" error. The date field in the table is "AppDate"; it is a datetime datatype and is not null. I know SQL Server stores dates internally in a form different from what is displayed. I have dates of 01/01/1900 as default dates, and I just discovered that SQL Server uses that same date as a default. To me it seems like SQL Server is switching datetime formats and C# is not recognizing it as a valid datetime. Thanks for your quick response...
|||Also, should I change my select statement and convert the date in the stored procedure,
like "select AppDate = convert(datetime,AppDate) from ...|||What value does dtStartDate have??|||dtStartDate has 12/01/2003 12:00:00 AM

and the dtEndDate has 12/31/2003 12:00:00 AM
|||I am not sure why you get the error
Cannot convert datetime to string

It must be that your sorting procedure is handling some parameter in the wrong manner.
|||It's funny, because that's what I originally thought. I was able to sort by a few columns and it works, but some columns would throw that conversion error. I know I am sorting by the right field name, but for some reason, and I don't know why, SQL Server is switching datetime formats. That leads me back to my original question: SQL Server is returning the recordset as yyyy-mm-dd, so is there a way to change it to mm-dd-yyyy? I notice that other developers are experiencing the same problem, and the answer seems to be that SQL Server and the ASP.NET application have to have the same regional (datetime) setting. But I haven't found an answer for changing that setting in SQL Server or in the ASP.NET application. I know how to change the settings on the server, but that doesn't help.
|||Post your sorting code...
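On the original question of getting mm/dd/yyyy out of SQL Server: the datetime
type itself carries no format to set, but the stored procedure can hand back an
already-formatted string and the grid can bind to that column instead. A rough
sketch only - viewReportPipe and AppDate come from this thread, while the
procedure name and the AppDateText alias are invented:

CREATE PROCEDURE rptSelectReportFormatted
    @pStartDate datetime,
    @pEndDate datetime
AS
SELECT *,
       CONVERT(varchar(10), AppDate, 101) AS AppDateText  -- style 101 = mm/dd/yyyy
FROM viewReportPipe
WHERE AppDate BETWEEN @pStartDate AND @pEndDate

Note that AppDateText is then a plain string, so any sorting should still be
done on the underlying AppDate column.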

Wednesday, March 7, 2012

My table design - where did I go wrong?

CREATE TABLE Sales1
(
CustID varchar (10),
TransID varchar (10),
SaleDate datetime,
S1 money,
S2 money,
S3 money,
S4 money,
V1 numeric
)
CREATE NONCLUSTERED INDEX Sales1_CustID ON Sales1 (CustID)
CREATE NONCLUSTERED INDEX Sales1_SaleDate ON Sales1 (SaleDate)
"money" is just the right size for my fields. This table has 9,500,000
records.
Although I need to do this select in less than a half second, it takes 1
full minute:
SELECT MAX(S1) FROM Sales1 WHERE SaleDate > '1/1/2005'
And I need to do this select in less than a half second, but it takes 3
minutes:
SELECT AVG(S1 / S2) FROM Sales1 WHERE S3 > S4
Am I supposed to create a new field with these values pre-calculated? I hope
not, because I have several other formulas - up to 500 different types of
selects which are all similar.
I am testing with SQL 2000 Developer. XEON CPU, fast SATA hard drive and 2GB
of memory.
Thanks
|||Hi
The 1st query can use the index you created on SaleDate.
The 2nd query has no option but to read every row (9.5 million) in the
table to find where S3 > S4, and then, based on that subset, do the S1/S2
evaluation.
Either add an index, or restrict the select to something that can use one
of the existing indexes.
Regards
--
Mike Epprecht, Microsoft SQL Server MVP
Zurich, Switzerland
IM: mike@.epprecht.net
MVP Program: http://www.microsoft.com/mvp
Blog: http://www.msmvps.com/epprecht/
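For the first query, "add an index" could look like the sketch below: a
composite index that carries S1 alongside SaleDate, so the MAX over a date
range can be answered from the index alone instead of fetching every
qualifying row back from the table (the index name here is invented).

CREATE NONCLUSTERED INDEX Sales1_SaleDate_S1 ON Sales1 (SaleDate, S1)

SELECT MAX(S1) FROM Sales1 WHERE SaleDate > '20050101'  -- unambiguous yyyymmdd literal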
"Rich" <no@.spam.invalid> wrote in message
news:G_uGe.53624$4o.5775@.fed1read06...
> CREATE TABLE Sales1
> (
> varchar (10) CustID,
> varchar (10) TransID,
> datetime SaleDate,
> money S1,
> money S2,
> money S3,
> money S4,
> numeric V1
> )
> CREATE NONCLUSTERED INDEX Sales1_CustID ON Sales1 (CustID)
> CREATE NONCLUSTERED INDEX Sales1_SaleDate ON Sales1 (SaleDate)
> "money" is just the right size for my fields. This table has 9,500,000
> records.
> Although I need to do this select in less than a half second, it takes 1
> full minute:
> SELECT MAX(S1) FROM Sales1 WHERE SaleDate > '1/1/2005'
> And I need to do this select in less than a half second, but it takes 3
> minutes:
> SELECT AVG(S1 / S2) FROM Sales1 WHERE S3 > S4
> Am I supposed to create a new field with these values pre-calculated? I
> hope
> not, because I have several other formulas - up to 500 different types of
> selects which are all similar.
> I am testing with SQL 2000 Developer. XEON CPU, fast SATA hard drive and
> 2GB
> of memory.
> Thanks
>
|||Shoot... SQL server might not work for me. I have to be able to calculate
sums, averages, etc. based on a number of different fields, and there are
millions of records. I have to get results within milliseconds. Is there a
way I can tell SQL server to load *everything* into memory?
"Mike Epprecht (SQL MVP)" <mike@.epprecht.net> wrote in message
news:OVlgKgHlFHA.1440@.TK2MSFTNGP14.phx.gbl...
> Hi
> The 1st query can use the index your created on SaleDate.
> The 2nd query has no option, but to read every row (9.5 million) in the
> table to find where S3 > S4, and then based on that subset, so the S1/S2
> evaluation.
> Either add and index, to restrict the select to something that can use one
> of the indexes.
> Regards
> --
> Mike Epprecht, Microsoft SQL Server MVP
> Zurich, Switzerland
> IM: mike@.epprecht.net
> MVP Program: http://www.microsoft.com/mvp
> Blog: http://www.msmvps.com/epprecht/
> "Rich" <no@.spam.invalid> wrote in message
> news:G_uGe.53624$4o.5775@.fed1read06...
of
|||Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, data types, etc. in
your schema are. Sample data is also a good idea, along with clear
specifications. It is very hard to debug code when you do not let us
see it.
But based on your weird, vague personal language:
1) You have no key because everything is NULL-able
2) You don't know the proper format for dates in SQL
3) You don't know about the math errors in the deprecated MONEY data
type - use DECIMAL(s,p) instead.
4) You don't know how to declare NUMERIC (s,p) data
5) Either you were sloppy or you don't know what a repeating group and
1NF are.
6) You don't know that rows are not records and columns are not fields.
If you cared about the people helping you for free, would your DDL
look more like this?
CREATE TABLE SalesTransactions
(trans_nbr CHAR(10) NOT NULL PRIMARY KEY,
cust_id CHAR(10) NOT NULL
REFERENCES Customers (cust_id)
ON UPDATE CASCADE,
sales_date DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL
CHECK (<<SET to 00:00:00 Hrs >> ),
repeating_grp_1 DECIMAL (12,4) DEFAULT 0.00 NOT NULL,
repeating_grp_2 DECIMAL (12,4) DEFAULT 0.00 NOT NULL,
repeating_grp_3 DECIMAL (12,4) DEFAULT 0.00 NOT NULL,
repeating_grp_4 DECIMAL (12,4) DEFAULT 0.00 NOT NULL,
vague_thing_1 NUMERIC(12,4) DEFAULT 0.00 NOT NULL);
You can use a CASE expression with aggregates to do all of these things
in one statement. You will simply have to make a table scan for some
of them, so indexing will not help. Do them all at once or put them in
a VIEW.
CREATE VIEW Godzilla (..)
AS
SELECT MAX(CASE WHEN sales_date > '2005-01-01'
           THEN repeating_grp_1 ELSE NULL END)
       AS max_1,
       AVG(CASE WHEN repeating_grp_3 > repeating_grp_4
           THEN repeating_grp_1 / repeating_grp_2
           ELSE NULL END)
       AS foobar_avg,
       etc.
FROM SalesTransactions;
But this table feels like a set of design flaws.
|||How about thinking a little outside the norm? Denormalisation may be useful
here.
The idea is that you have a table which contains the results of the query
already done.
Then you could add triggers or maybe a SQL Job to re-calculate the
aggregates, and fill in a table.
Averages (or more correctly the mean value) can be calculated easily during
the insert. Updates and deletes may be a little more tricky, but then
that's where the job comes in. If it's not critical that the aggregate is
calculated NOW, but can be done, let's say, once a day, then a SQL Job could be
set up to re-calculate every so often. Does it really matter if it takes
a minute if no one is sat there waiting for it? OK, not what you wanted to
hear I know, but it's a possibility.
Another thought is that you may want to take a look into using composite
indexes. For your second query, try this index; it may help to improve the
performance (my SQL box is at work so I can't try it).
Before you create the index, run the query and get SQL to display the Actual
Execution plan. You'll notice that the query is performing a Table Scan
(or, if you've now created a clustered index, a Clustered Index Scan). Use
SQL Profiler to take a reading of the Number of Reads that SQL needs to
execute your query.
Now Create the following index
Create NonClustered Index Sales1_S3_S4_S1_S2 On Sales1 ( S3, S4, S1, S2 )
If I've got it right, when you re-execute the query, the execution plan will
show that it's performing an Index Scan, or possibly even an Index Seek (but
I don't think this will happen). Now use SQL Profiler to get the number of
reads; you'll notice that it is a lot lower. Also, the execution time should
be lower too. Now turn off the Execution Plan and stop the Profiler to
get the production speed of the query.
I know that I've rambled on a bit, but I hope it gives you some ideas.
Regards
Colin Dawson
www.cjdawson.com
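A rough sketch of the pre-aggregated table idea, assuming a SQL Agent job (or a
trigger) runs the refresh; every name below is invented, and the two aggregates
are simply the ones from the original post.

CREATE TABLE Sales1Summary
(
    calc_time   datetime NOT NULL DEFAULT GETDATE(),
    max_s1_2005 money NULL,
    avg_s1_s2   money NULL
)

-- Run from a scheduled job every few minutes:
INSERT INTO Sales1Summary (max_s1_2005, avg_s1_s2)
SELECT (SELECT MAX(S1) FROM Sales1 WHERE SaleDate > '20050101'),
       (SELECT AVG(S1 / S2) FROM Sales1 WHERE S3 > S4)

-- Readers then get their answer in milliseconds:
SELECT TOP 1 max_s1_2005, avg_s1_s2
FROM Sales1Summary
ORDER BY calc_time DESC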
"Rich" <no@.spam.invalid> wrote in message
news:EjxGe.53631$4o.43600@.fed1read06...
> Shoot... SQL server might not work for me. I have to be able to calculate
> sums, averages, etc. based on a number of different fields, and there are
> millions of records. I have to get results within milliseconds. Is there a
> way I can tell SQL server to load *everything* into memory?
> "Mike Epprecht (SQL MVP)" <mike@.epprecht.net> wrote in message
> news:OVlgKgHlFHA.1440@.TK2MSFTNGP14.phx.gbl...
> of
|||Thanks I'll give that a try. But I have to recalculate those 500 fields
every 5 minutes... do you suppose that "Real Time OLAP" would help in this
case?
"Colin Dawson" <newsgroups@.cjdawson.com> wrote in message
news:iJxGe.79457$G8.17569@.text.news.blueyonder.co.uk...
> How about thinking a little outside the norm. Denormalisation may be
useful
> here.
> The idea is that you have a table which contains the results of the query
> already done.
> Then you could add triggers or maybe a SQL Job to re-calculate the
> aggregates, and fill in a table.
> Averages (or more correctly the mean value) can be calculated easily,
during
> the insert. Update's and delete's may be a little more tricky, but then
> that's there the JOB comes in. If it's not critical that the aggregate is
> calculated NOW, but can be done lets say once a day then a SQL Job could
be
> setup to run re-calculate every so often. Does it really matter if it
takes
> a minute if no one is sat there waiting for it? OK, not what you wanted
to
> hear I know, but it's a possibility.
> Another thought is that you may want to take a look into using composite
> indexes. For you second query, try this index it may help to improve the
> performance (my SQL box is at work so I can't try it)
> Before you create the index, run the query and get SQL to display the
Actual
> Execution plan. You'll notice that the Query is performing a Table Scan,
> (or if you've now created a clustered index, as Clustered Index scan).
Use
> SQL Profiler to take a reading of the Number of Reads that SQL needs to
> execute your query.
> Now Create the following index
> Create NonClustered Index Sales1_S3_S4_S1_S2 On Sales1 ( S3, S4, S1, S2 )
> If I've got it right, when you re-execute the query, the execution plan
will
> show that it's performing an Index Scan Or possibly even and Index S
(but
> I don't think this will happen) Now use SQL Profiler to get the number of
> reads, you'll notice that it was alot lower. Also the execution time
should
> be lower too. Now turn off the Execution Plan, and stop the profiler to
> get the production speed of the query.
> I know that I've rambled on a bit, but I hope it gives you some ideas.
> Regards
> Colin Dawson
> www.cjdawson.com
> "Rich" <no@.spam.invalid> wrote in message
> news:EjxGe.53631$4o.43600@.fed1read06...
calculate
are
a
the
S1/S2
9,500,000
takes
3
I
types
|||Honestly, I've not looked at OLAP yet. I really think that I should do so at
some point soon.
Re-calculating 500 fields every five mins! Ouch. I know that this idea
could end up in a complete tangle, but maybe it would be worth splitting the
workload down. For example if you're working out averages, use the real
data to work out the averages for portions of the data, then to get the
average for the whole, use the results from the portions to calculate the
whole. Also does the entire table have to be re-calculated? If not, don't
do the whole table. For example...
Say that you're talking about sales for the last week. Today is Sunday, and
as the information is entered into the system each day, and the previous
day's information doesn't get altered, there's no need to re-calculate
yesterday's average; it's static.
Calculating the average for the week so far is as simple as calculating the
average for Monday to Saturday - i.e. the six separate averages are added
together, then divided by 6. Then all you need to do is include today's
average, which will only ever be on a small dataset compared to the entire
table. Bet this can be calculated in a matter of ms.
I can't give you any specifics on how to get the best performance from your
database, for the simple reason that I don't have a full and complete
understanding of your data. This is the key to getting maximum performance
out of any database.
Regards
Colin Dawson
www.cjdawson.com
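One refinement to the splitting idea: if each day's portion stores its running
SUM and row COUNT rather than its average, the overall mean comes out exact
even when the days hold different numbers of rows. A sketch against the Sales1
table from this thread; the daily table and its column names are invented.

CREATE TABLE Sales1Daily
(
    sale_day datetime NOT NULL PRIMARY KEY,  -- midnight of the day
    s1_sum   money NOT NULL,
    row_cnt  int NOT NULL
)

-- Refresh only today's row; closed days never change.
DELETE FROM Sales1Daily WHERE sale_day = CONVERT(char(8), GETDATE(), 112)
INSERT INTO Sales1Daily (sale_day, s1_sum, row_cnt)
SELECT CONVERT(char(8), SaleDate, 112), SUM(S1), COUNT(*)
FROM Sales1
WHERE SaleDate >= CONVERT(char(8), GETDATE(), 112)
GROUP BY CONVERT(char(8), SaleDate, 112)

-- The exact overall average for whatever range the daily table covers:
SELECT SUM(s1_sum) / SUM(row_cnt) AS avg_s1 FROM Sales1Daily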
"Rich" <no@.spam.invalid> wrote in message
news:exyGe.53638$4o.28608@.fed1read06...
> Thanks I'll give that a try. But I have to recalculate those 500 fields
> every 5 minutes... do you suppose that "Real Time OLAP" would help in this
> case?
> "Colin Dawson" <newsgroups@.cjdawson.com> wrote in message
> news:iJxGe.79457$G8.17569@.text.news.blueyonder.co.uk...
> useful
> during
> be
> takes
> to
> Actual
> Use
> will
> (but
> should
> calculate
> are
> a
> the
> S1/S2
> 9,500,000
> takes
> 3
> I
> types
>