How to input time in C++

anishannayya

Member
Jun 10, 2008
136
0
0
Okay, I am creating a program that will calculate the cost of a long distance call. I am stuck, however, on the part that says the time is to be input in 24-hour notation, so if the time is 1:30 P.M., it is input as 13:30. The problem is that the rates change at different times. Originally I was thinking I would make my statements like (time <= 18:00), but then I realized I would get a syntax error. What should I do?
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Read the data as a string and parse it into its components: hour, minute, and second. Then you can do whatever math you need to figure out times, time spans, or anything else. If the string isn't in the right format, you should alert the user and show them an example of what to enter.
 

brandonb

Diamond Member
Oct 17, 2006
3,731
2
0
Originally posted by: anishannayya
Okay, I am creating a program that will calculate the cost of a long distance call. I am stuck, however, on the part that says the time is to be input in 24-hour notation, so if the time is 1:30 P.M., it is input as 13:30. The problem is that the rates change at different times. Originally I was thinking I would make my statements like (time <= 18:00), but then I realized I would get a syntax error. What should I do?

I suggest that you read up on data types.

Find out the difference between a string, a float, and an integer; how they are stored in memory; and how to convert from one data type to another, using casting or other methods.

Here are the things you will need to know before attempting such a feat:

1) How to convert a start time/end time from string format into an integer.
2) Doing math between those numbers to get the number of minutes in the rate period.
3) Multiplying the number of minutes by the rate to get the final total as a float, and then converting back from float to string.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
As others have said, this is a simple data-type problem. Read the time in as a string, look for the ':', then perform some math-magic on it to convert it to an integer, etc. (atoi). Not a difficult problem, really (CS101?)
 
May 11, 2008
21,910
1,347
126
Converting from string to integer is really easy:

ASCII table: '0' = 0x30, '1' = 0x31, '2' = 0x32, ..., '9' = 0x39.
You have multiple ways: you can use a switch-case construction where you just test on the pure ASCII codes, like '0', '1', '2', etc. Or you can take the characters of the string that hold digits and subtract 0x30 from each.

A simple if statement testing for the presence of 'P' in the AM/PM part of the string, and you add 12 to your "hour" integer. Then you have 24-hour format.

Of course, you have to parse the string first to make sure you have the right elements of the string array. But that should be easy too.
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
You can simplify your life by representing all times as seconds when doing math. For instance, 1:14 PM and 10 seconds is 13:14:10, which is 13*3600 + 14*60 + 10 seconds, or 47650 seconds. The only caveat is that for times that cross midnight, e.g. from 23:xx:yy to 0:zz:qq, you will need to use the modulo operator (%) and mod the result by 24*3600.

Avoid floats for this problem.

Potentially useful library calls: atoi, strtok
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
Originally posted by: degibson
You can simplify your life by representing all times as seconds when doing math. For instance, 1:14 PM and 10 seconds is 13:14:10, which is 13*3600 + 14*60 + 10 seconds, or 47650 seconds. The only caveat is that for times that cross midnight, e.g. from 23:xx:yy to 0:zz:qq, you will need to use the modulo operator (%) and mod the result by 24*3600.

Avoid floats for this problem.

Potentially useful library calls: atoi, strtok

That doesn't simplify your life if the user is entering a string. "Okay, you can use my program to enter your times, but you have to convert all your times to Unix time so I can process them more easily."

Once you have converted the string to some meaningful time representation, your job is pretty much done. (Objects can actually be pretty useful for this; yes, a bit slower, but easier for mere mortals to comprehend.)
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
Originally posted by: Cogman
Originally posted by: degibson
You can simplify your life by representing all times as seconds when doing math. ...

That doesn't simplify your life if the user is entering a string. "Okay, you can use my program to enter your times, but you have to convert all your times to Unix time so I can process them more easily."
...

It was Unix time that I was thinking of. The only difference is that I was suggesting 'seconds since midnight' rather than 'seconds since the epoch', as midnight is a little more intuitive for a beginner than 12 AM on 1 January 1970.
 

drebo

Diamond Member
Feb 24, 2006
7,034
1
81
'Seconds since' time representations are by far the easiest and fastest method for time comparisons and storage. If you went the route of three separate variables for hours, minutes, and seconds, you would need three different comparisons to know whether one time was earlier or later than another, and three memory locations to store them. And assembling it all back into a readable format is no simpler.

Time in seconds, whether that's seconds since 1970 or seconds since midnight, is the way to go.
 

sao123

Lifer
May 27, 2002
12,653
205
106
The best solution is to write your own time class with members HH and MM, and then write a comparison operator function to compare two objects of type Time.

This is elementary object-oriented programming.
 

drebo

Diamond Member
Feb 24, 2006
7,034
1
81
Just because you're overloading the comparison operator in the calling code does not mean that multiple comparisons aren't still happening.

While I agree that a class would be a good way to go about this implementation, the time itself should still be stored as a "seconds from" value. This simplifies computation, comparison, and storage.
 

sao123

Lifer
May 27, 2002
12,653
205
106
Originally posted by: drebo
Just because you're overloading the comparison operator in the calling code does not mean that multiple comparisons aren't still happening.

While I agree that a class would be a good way to go about this implementation, the time itself should still be stored as a "seconds from" value. This simplifies computation, comparison, and storage.

Sure, it's a very good implementation, and for homework it might be acceptable. However, there are limitations, and it was thinking like this that caused the Y2K problem to begin with.

It is easier to overflow one variable than it is to fill up a full date-time class (month, day, year, hours, minutes, seconds).
You must always be forward-thinking.
 
May 11, 2008
21,910
1,347
126
I guess the use of a 64-bit unsigned integer for the number of seconds would have solved the Y2K problem, if only one variable was used. But then again, that is 8 bytes.


For storage's sake:

days as an unsigned integer: 65535 days, 2 bytes.
hours in BCD format: up to 99 hours, 1 byte.
minutes in BCD format: up to 99 minutes, 1 byte.
seconds in BCD format: up to 99 seconds, 1 byte.


Some more or less history:
BCD stands for binary-coded decimal. 4 bits give you 16 different combinations, and only the combinations that code for 0 to 9 are used. This way you can pack two BCD digits into 1 byte. Easy for storage, and you don't have to do a lot of math: just send the 4 bits to the 7-segment display driver and you have the number on the display. In the old days this saved a lot of calculations and a lot of very expensive storage (read: RAM).
Used in discrete logic chips, microcontrollers, and old CPUs.

/end history.

This would give you 179 years and some change.
3 bytes saved.

But if the days were 4 bytes, you would have more than 11 million years.
That's 7 bytes: 1 byte saved.

And it's easy to implement in logic: just BCD counters carrying bits to each other, and of course keeping track of the numbers 60 and 24.

Long ago, memory was precious and costly, and a dumb tradeoff was made in the PC world.

But I have to agree that for math and comparison, one big unsigned integer is easier.


Sure, it's a very good implementation, and for homework it might be acceptable. However, there are limitations, and it was thinking like this that caused the Y2K problem to begin with.

But how was the date and time stored in PCs originally?

I have no clue. I'll Google it some time later.

I do seem to remember Apple never had this Y2K problem.


I am personally spoiled with real-time clock chips that have enough RAM to store tenths of seconds, seconds, minutes, hours, day, month, and year.
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
Originally posted by: sao123
Originally posted by: drebo
Just because you're overloading the compare operator in the calling code does not mean that multiple comparisons aren't still happening.

While I agree that a class would be a good way to go about this implementation, the time itself should still be stored as a "seconds from" value. This simplifies computation, comparison, and storage.

sure its a very good implementation, and for homework it might be acceptable. However there are limitations, and it was thought like this which caused the Y2K problem to begin with.

It is easier to overflow 1 variable, than it would be to fill up a full date time class (Month, Day, Year, Hours, Minutes, Seconds)
You must always be foreward thinking.

Integer overflow: 2^31 - 1: > 2 billion
Unsigned integer overflow: 2^32: > 4 billion
64-bit unsigned integer overflow: > 18 quintillion

Converting from seconds to years:
Integers: 68 years
Unsigned ints: 136 years
64-bit uints: roughly 585 million millennia

In short, the sun will burn out before a 64-bit value is overflowed by a second-ticking clock. Only if the OP's code still has to be running at that time is there a chance of overflow.

As an aside, if you use a 32-bit value to represent the month, another for the day, the year, etc., you get about 4 million millennia worth of time at the same resolution.
 

sao123

Lifer
May 27, 2002
12,653
205
106
Originally posted by: William Gaatjes
I guess the use of a 64-bit unsigned integer for the number of seconds would have solved the Y2K problem, if only one variable was used. But then again, that is 8 bytes.
...
But how was the date and time stored in PCs originally?

I have no clue. I'll Google it some time later.


Most hardware systems stored the date as an integer counting from Jan 1, 1980. This integer was incremented 18.2 times a second (the frequency of the clock generator). It was a 32-bit field. At some point, the system must reach overflow.
I did the math and came out with 2^32 / (18.2*60*60*24*365) = 7.4 years, so I'm missing another multiplier in there (to make it balance out to the 20 years), but this was the basis for Y2K.
 
May 11, 2008
21,910
1,347
126
Most hardware systems stored the date as an integer counting from Jan 1, 1980. This integer was incremented 18.2 times a second (the frequency of the clock generator). It was a 32-bit field. At some point, the system must reach overflow. I did the math and came out with 2^32 / (18.2*60*60*24*365) = 7.4 years, so I'm missing another multiplier in there (to make it balance out to the 20 years), but this was the basis for Y2K.

In the article, the writer mentioned 2^31. The MSB is used as a sign; I don't know why, since we don't have negative time. But according to the article an int is used, which explains it. Was somebody not thinking when implementing this?

Keeping the sign bit in the back of our heads, I come to the following calculation:
2^31 / (18.2 * 3600 * 24 * 365) = 3.74 years. But the article mentions that the integer counts seconds, not ticks: 3.74 * 18.2 = 68 years. 1970 + 68 = 2038. There is the missing link. :confused:

 

sao123

Lifer
May 27, 2002
12,653
205
106
Originally posted by: William Gaatjes
Most hardware systems stored the date as an integer counting from Jan 1, 1980. This integer was incremented 18.2 times a second (the frequency of the clock generator). It was a 32-bit field. At some point, the system must reach overflow. I did the math and came out with 2^32 / (18.2*60*60*24*365) = 7.4 years, so I'm missing another multiplier in there (to make it balance out to the 20 years), but this was the basis for Y2K.

In the article, the writer mentioned 2^31. The MSB is used as a sign; I don't know why, since we don't have negative time. But according to the article an int is used, which explains it. Was somebody not thinking when implementing this?

Keeping the sign bit in the back of our heads, I come to the following calculation:
2^31 / (18.2 * 3600 * 24 * 365) = 3.74 years. But the article mentions that the integer counts seconds, not ticks: 3.74 * 18.2 = 68 years. 1970 + 68 = 2038. There is the missing link. :confused:

thank you.
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
Y2K was the problem of storing only the last two digits of the year, as two BCD digits. BCD math would have caused 0x99 to roll over to 0x00, hence '2000' would be < '1999', potentially causing all kinds of ugliness in old, important code (read: financial institutions) that was written decades ago and is still in use today.