Is it possible to read and write to same file, at same time?

Red Squirrel

No Lifer
May 24, 2003
69,726
13,343
126
www.betteroff.ca
Using fstream, is it possible to open a file and be able to read and write in that file within the same session?

I can't seem to get this to work, so I'm wondering if it's even possible.

Basically, something like this:

Code:
	fstream fileio;
	fileio.open("testfile.txt",fstream::in | fstream::out | fstream::app | fstream::binary);
	
	fileio.write("this is a test",14);
	
	fileio.seekg(1);
	
	char buffer[6];
	
	fileio.read(buffer,6);

	cout<<buffer<<endl; //should read "his is"
	
	fileio.close();

Or am I better off closing the file each time I'm done with an operation? Will this slow down the program if there are a lot of operations? If I leave it open and get this working, what happens if the program crashes or the file is accessed by another program?


Also, why do I get garbage characters in this code?

Code:
	fstream fileio;
	fileio.open("testfile.txt",fstream::in | fstream::binary);
	/*
	fileio.write("this is a test",14);
	*/
	fileio.seekg(10); //if I remove this it works fine
	
	char buffer[2];
	
	fileio.read(buffer,2);

	cout<<buffer<<"[end]"<<endl; //should return "te" but returns "te" with garbage chars at the end.
	
	fileio.close();
 
Last edited:

esun

Platinum Member
Nov 12, 2001
2,214
0
0
The problem with the first program is the use of the append flag, which is only valid for output-only file streams. This works fine (see the comment on the second program for the reasoning behind null-terminating the char array):

Code:
#include <fstream>
#include <iostream>

using namespace std;

int main()
{
  fstream fileio;
  fileio.open("testfile.txt", ios::in | ios::out | ios::binary); // with in|out and no trunc, open fails unless the file already exists

  fileio.write("this is a test",14);
  
  fileio.seekg(1);
  
  char buffer[7];
  
  fileio.read(buffer,6);
  buffer[6] = '\0';
  
  cout<<buffer<<endl;

  fileio.close();
}

The problem with your second program is the lack of a null-terminator on your C-style string. Recall that you don't know when the end of a plain old character array is without some sort of indicator. The standard indicator is '\0', so you need to null-terminate your char array if you want to print it out that way.
 

Red Squirrel

No Lifer
May 24, 2003
69,726
13,343
126
www.betteroff.ca
I partially got it working except for the 2nd program. I applied the code to my actual class now.

As for the null terminator, what happens if I want to actually read 0x0 as a char? I forgot about the need for null termination for old C-style strings... is there another way? This gets converted to a string at some point. Is there any way I can just read directly into a string?

This will be binary data, so there will be 0x0 bytes and they will have to be read as such. For example, reading a 32-bit int consists of reading 4 bytes, which could be something like 0x00 0x01 0x00 0x00 for the number 65536 (big-endian).

Edit: never mind, I see what you mean now. Think I will be ok, need to do more testing but looks like this is working ok for me.
 

Red Squirrel

No Lifer
May 24, 2003
69,726
13,343
126
www.betteroff.ca
I wrote my class that handles this quite well so far for my purpose. Now one more question: is it safe to just keep the file open the whole time, or should I open and close it after each operation? According to my testing it seems safe to just leave it open, but I'm wondering what others' opinions are on this. I imagine big programs like Exchange just leave the store open all the time, since it slows down if you keep closing and opening.
 

BoberFett

Lifer
Oct 9, 1999
37,562
9
81
It is possible to run out of file handles. How many files do you have open simultaneously if you never close the file? How does file opening and closing affect disk writes? If your program or the system crashes while the file is open, what are the effects? Is the file corrupted or left in a usable state? Have you done benchmarks to see if it even makes a difference?

You're still better off just using SQL. The fact that you don't understand file operations makes it plainly obvious you're new to programming. Don't reinvent the wheel.
 
Last edited:

Red Squirrel

No Lifer
May 24, 2003
69,726
13,343
126
www.betteroff.ca
It is possible to run out of file handles. How many files do you have open simultaneously if you never close the file? How does file opening and closing affect disk writes? If your program or the system crashes while the file is open, what are the effects? Is the file corrupted or left in a usable state? Have you done benchmarks to see if it even makes a difference?

You're still better off just using SQL. The fact that you don't understand file operations makes it plainly obvious you're new to programming. Don't reinvent the wheel.

It has nothing to do with not understanding; I want to learn, and the only way to know is to be taught. And this has nothing to do with MySQL.
 

esun

Platinum Member
Nov 12, 2001
2,214
0
0
I don't see a problem with keeping the file handle open. As long as you're using it there's no reason to open/close it constantly. It would just be a hassle and a slight hit in performance.
 

Red Squirrel

No Lifer
May 24, 2003
69,726
13,343
126
www.betteroff.ca
That's what I figured. It does seem to write in real time and not buffer anywhere in memory, so that's good. I terminated my app without properly closing the file and the data was still written.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
That's what I figured. It does seem to write in real time and not buffer anywhere in memory, so that's good. I terminated my app without properly closing the file and the data was still written.

No, it does buffer writes until you call something like fsync. You just see the results via something like cat because the OS is smart enough to give you the contents from memory.
 

Red Squirrel

No Lifer
May 24, 2003
69,726
13,343
126
www.betteroff.ca
No, it does buffer writes until you call something like fsync. You just see the results via something like cat because the OS is smart enough to give you the contents from memory.

Actually I was using Windows (SMB share), but to be on the safe side I added a sync() function to my class which I can call after I'm done with a group of file operations. It closes and reopens the file.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Actually I was using Windows (SMB share), but to be on the safe side I added a sync() function to my class which I can call after I'm done with a group of file operations. It closes and reopens the file.

SMB/CIFS is just another type of filesystem, and since it's a network filesystem, caching is even more important for performance. And closing/reopening the file is pointless, as it won't force the flushing of anything more than sync/fsync will.
 

Red Squirrel

No Lifer
May 24, 2003
69,726
13,343
126
www.betteroff.ca
SMB/CIFS is just another type of filesystem, and since it's a network filesystem, caching is even more important for performance. And closing/reopening the file is pointless, as it won't force the flushing of anything more than sync/fsync will.

Huh, so closing the file is useless? All this time, when writing anything to disk, I've just closed the file when done and called it a day. Never had issues, but maybe I've been lucky. I thought that was the whole point: making sure the data is written. So should I be calling sync after each operation, and should I also call flush?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Huh, so closing the file is useless? All this time, when writing anything to disk, I've just closed the file when done and called it a day. Never had issues, but maybe I've been lucky. I thought that was the whole point: making sure the data is written. So should I be calling sync after each operation, and should I also call flush?

I'd say so. Why would you assume that closing a file flushes its data to disk?
 

Markbnj

Elite Member / Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
http://www.cplusplus.com/reference/clibrary/cstdio/fclose/

int fclose ( FILE * stream );

Closes the file associated with the stream and disassociates it.
All internal buffers associated with the stream are flushed: the content of any unwritten buffer is written and the content of any unread buffer is discarded.
 
Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
http://www.cplusplus.com/reference/clibrary/cstdio/fclose/

int fclose ( FILE * stream );

Closes the file associated with the stream and disassociates it.
All internal buffers associated with the stream are flushed: the content of any unwritten buffer is written and the content of any unread buffer is discarded.

I'm more familiar with straight C and the man page for close(2) says:

Code:
A successful close does not guarantee that the data has been successfully saved to disk, as the kernel defers writes. It is not common for a file system to flush the buffers when the stream is closed. If you need to be sure that the data is physically stored use fsync(2). (It will depend on the disk hardware at this point.)

I guess I just tend to err on the opposite side because I know how VM works and how heavily things are cached with the amount of memory people have these days.
 

Markbnj

Elite Member / Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
I'm more familiar with straight C and the man page for close(2) says:

Code:
A successful close does not guarantee that the data has been successfully saved to disk, as the kernel defers writes. It is not common for a file system to flush the buffers when the stream is closed. If you need to be sure that the data is physically stored use fsync(2). (It will depend on the disk hardware at this point.)

I guess I just tend to err on the opposite side because I know how VM works and how heavily things are cached with the amount of memory people have these days.

All versions of fclose() that I'm familiar with (C and C++) flush the write buffers, and that's been standard behavior for at least twenty years. I'm not sure about _close() (if that's the version you're referring to), but I think it is a lower-level function and may very well not guarantee the flushing of write buffers.
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
In truth, every file system has its own set of semantic guarantees. close() on EXT2 has different guarantees than close() on EXT3, and close() on CIFS similarly has different guarantees than a local file system. To figure out what guarantees are available, you have to read up on the target file system, not the system call you are using to access it.

E.g., your close()/re-open() trick would make your file updates visible to other processes in an AFS-based network file system when the close() commits, but in NFS writes are often propagated earlier than a close().

If you need some sort of consistency guarantee in the face of an arbitrary failure, I suggest you use a DB.
 

Red Squirrel

No Lifer
May 24, 2003
69,726
13,343
126
www.betteroff.ca
http://www.cplusplus.com/reference/clibrary/cstdio/fclose/

int fclose ( FILE * stream );

Closes the file associated with the stream and disassociates it.
All internal buffers associated with the stream are flushed: the content of any unwritten buffer is written and the content of any unread buffer is discarded.

That's what I figured, for close() it says something similar:

http://www.cplusplus.com/reference/iostream/fstream/close/

Closes the file currently associated with the object, disassociating it from the stream. Any pending output sequence is written to the physical file.

The function effectively calls rdbuf()->close().

The function fails if no file is currently open (associated) with this object.
 

Markbnj

Elite Member / Moderator Emeritus
Moderator
Sep 16, 2005
15,682
14
81
www.markbetz.net
In truth, every file system has its own set of semantic guarantees. close() on EXT2 has different guarantees than close() on EXT3, and close() on CIFS similarly has different guarantees than a local file system. To figure out what guarantees are available, you have to read up on the target file system, not the system call you are using to access it.

E.g., your close()/re-open() trick would make your file updates visible to other processes in an AFS-based network file system when the close() commits, but in NFS writes are often propagated earlier than a close().

If you need some sort of consistency guarantee in the face of an arbitrary failure, I suggest you use a DB.

I was going to reply that yes, that is good advice, but the behavior of fclose() is specified by the '99 C standard, so at least you should be able to rely on that. But what it says is...

"A successful call to the fclose function causes the stream pointed to by stream to be flushed and the associated file to be closed. Any unwritten buffered data for the stream are delivered to the host environment to be written to the file; any unread buffered data are discarded."

So basically a handoff to the OS with a statement of intent. Practically speaking RS should be able to rely on this behavior on any major file system he is likely to be using in a desktop environment.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
All versions of fclose() that I'm familiar with (C and C++) flush the write buffers, and that's been standard behavior for at least twenty years. I'm not sure about _close() (if that's the version you're referring to), but I think it is a lower-level function and may very well not guarantee the flushing of write buffers.

Not _close(), just close() which is part of unistd.h and what I initially think of when I think about closing file descriptors. Now that I see that, I'm sure fclose() is the recommended method to use, but AFAIK close() is part of the C standard and isn't internal or anything.
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
close() operates on a file descriptor; fclose() operates on a buffered FILE* (a handle to a file in C). close() is a POSIX system call; fclose() is a C-language library call.
[edit]E.g., stdout and stdin are of type FILE* -- the file descriptors STDIN_FILENO and STDOUT_FILENO are of type int (for use with system calls) and are 0 and 1 respectively.[/edit]

In general, the f* family of functions (fprintf, fclose, fwrite, fread, etc.) is a buffering and convenience layer on top of the POSIX file interface (close, open, stat, read, write, etc.).

For documentation on the latter, see man 2 x, e.g., man 2 open.

Even after handing off to the file system after a write or a close, it is up to the file system drivers to actually make the data visible and consistent. I.e., the language of the standard is not violated by the implementation of fclose(), or even the language of close(), but the definition of
"A successful call to the fclose function causes the stream pointed to by stream to be flushed and the associated file to be closed. Any unwritten buffered data for the stream are delivered to the host environment to be written to the file; any unread buffered data are discarded."
is ambiguous in many ways.

[edit]
As for which one you should use, as usual, it depends.

-If you feel you need to control exactly what goes into OS-land and when, use the raw system calls.
-Elif you're writing C, use stdio calls (e.g., fopen())
-Elif you're writing C++, use iostream objects.

[/edit]
 
Last edited:

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Everyone else has kind of mentioned the same thing, but as for leaving the file open, there is probably a limit on the number of open file streams (though I'm not certain). In general, in my code I don't close a file stream until I am completely done with the file.

As for reading and writing at the same time, you just described the basic producer/consumer problem (even more so if you thread this application).

Finally, as degibson said: FILE* for C, and fstream objects for C++. There is little reason to use FILE* in C++, as fstream makes your life a little easier with virtually no overhead.

-Kevin