I don't know how the APPEND FROM function works, but if it is trying to pull the entire file into memory, then a 4GB file may be causing memory issues.
James
James Bott wrote: I don't know how the APPEND FROM function works, but if it is trying to pull the entire file into memory, then a 4GB file may be causing memory issues.
James
Please try the import in VFP using APPEND FROM.
If this works, the fault is in [x]Harbour.
Regards
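In both VFP and (x)Harbour the command looks essentially the same. A minimal sketch (the table and file names are only placeholders, and the field order in the text file is assumed to match the DBF structure):

   USE customers EXCLUSIVE          // open the target DBF
   APPEND FROM data.csv DELIMITED   // comma-separated values, text fields in quotes
   // APPEND FROM data.txt SDF      // alternative for fixed-width (SDF) text
   USE                              // close the table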
2. By the way, a DBF file cannot be larger than 4 GB (except in recent versions of ADS). Note: these limitations stem from 32-bit addressing. Similar limits could apply to CSV and SDF files too.
Can SQL files be larger than 4 GB, or is 32-bit limited to 4 GB files of any type? Or can the server be 64-bit and the workstations 32-bit?
Beginning with Advantage 8.0 the artificial 4 GB limit was removed. To use this feature only the server needs to be upgraded. When using DBF tables over 4GB in size, you must use Advantage Proprietary Locking.
André:
The file is not corrupted.
James:
I think there must be 32-bit addressing issues. Before you posted your suggestion, I tried splitting the file with the low-level file I/O functions and it did not work.
Rao:
Let me know how I can send you the file (400 MB zip).
Thank you all guys !
James, guys:
The problem is that, as Mr. Rao and I found, if you create a new text file with just the first 10 records, everything works perfectly. With the whole file (4.61 GB) it does not work. Splitting the file into multiple parts using the low-level functions does not work, and APPEND FROM does not work either.
Thank you guys.
Hunter,
Can you share the file on some cloud-based storage such as Google Drive, Dropbox, or any other service so we can download it?
I don't know if anyone has pointed this out, but the size of the CSV file does not directly relate to the size of the DBF.
The record size of the example DBF is 7,908 bytes. The record size of the CSV is much smaller because the fields are not fixed length and many of them are empty altogether.
So even if you split the CSV file in half, there is a possibility that when it is appended into the DBF, the DBF could be over the 4 GB limit.
If you take 4 GB and divide it by 7,908 bytes, you get somewhere around 500,000 records. So if half of the CSV file contains more than 500,000 records, you could be reaching the DBF size limit.
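To put a number on it, a quick sketch in (x)Harbour (the DBF header is ignored, so the figure is only approximate):

   PROCEDURE Main()
      LOCAL nRecSize := 7908                     // record length of the example DBF
      LOCAL nLimit   := 4 * 1024 * 1024 * 1024   // 4 GB DBF file-size ceiling
      ? Int( nLimit / nRecSize )                 // roughly 543,000 records at most
   RETURN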
André, James, Rao:
Here's the link:
https://drive.google.com/folderview?id= ... sp=sharing
James:
I still have not tried to convert it to a DBF. I wrote a small program to split the text file into pieces of 1,000,000 lines each. No success. The same program (the splitter) works on a test file of just 10 lines.
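For reference, a simplified sketch of the splitting approach (not the exact program; the output file names are placeholders and it assumes CRLF line endings). On a 32-bit build the run-time's internal file offsets may themselves overflow somewhere past the 2-4 GB mark, which would explain why the same code works on a 10-line test file but not on the real one:

   #define FO_READ 0

   PROCEDURE SplitFile( cSource )
      LOCAL nIn     := FOpen( cSource, FO_READ )  // low-level read handle
      LOCAL nOut    := 0                          // current output handle
      LOCAL nPart   := 0                          // output part counter
      LOCAL nLines  := 0                          // total lines written so far
      LOCAL cBuffer := Space( 65536 )             // fixed-size read block
      LOCAL cRest   := ""                         // carry-over (incomplete last line)
      LOCAL nRead, nPos, cChunk, cLine

      IF nIn == -1
         ? "Cannot open", cSource
         RETURN
      ENDIF

      DO WHILE ( nRead := FRead( nIn, @cBuffer, Len( cBuffer ) ) ) > 0
         cChunk := cRest + Left( cBuffer, nRead )
         // peel off complete CRLF-terminated lines from the chunk
         DO WHILE ( nPos := At( Chr( 13 ) + Chr( 10 ), cChunk ) ) > 0
            cLine  := Left( cChunk, nPos + 1 )
            cChunk := SubStr( cChunk, nPos + 2 )
            IF nLines % 1000000 == 0               // start a new part every 1,000,000 lines
               IF nOut > 0
                  FClose( nOut )
               ENDIF
               nPart++
               nOut := FCreate( "part" + StrZero( nPart, 3 ) + ".txt" )
            ENDIF
            FWrite( nOut, cLine )
            nLines++
         ENDDO
         cRest := cChunk
      ENDDO

      IF nOut > 0
         IF ! Empty( cRest )
            FWrite( nOut, cRest )                  // flush a trailing partial line
         ENDIF
         FClose( nOut )
      ENDIF
      FClose( nIn )
   RETURN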
The link shows an empty folder--at least for me.
Also, it sounds like you are saying that your splitter program didn't work for 1 million records. Is that correct?
James
1 million records? The DBF, at 7,908 bytes per record, would not hold that many before crashing. It would not hold much more than 500,000 records before reaching the 4 GB limit.