[MonetDB-users] large COPY INTO jobs vs. INSERT

Hi,

I am working with MonetDB v5.8.2 on a 32-bit machine with 2 GB of RAM. I have 20 CSV files, each containing 200,000 rows and 59 columns, and I want to import all of them into one table. When I run a series of COPY INTO statements to do that, everything goes well until the 15th file (which should bring the table from 2.8 million rows up to 3 million). The 15th file fails. When working through the MAPI interface, the database ends up corrupted, but it works again if I remove the logs. When trying the same directly through mclient.exe, it also fails at around the 15th file and gives this error:

  !SQLException:sql.bind:Cannot access descriptor
  !ERROR: GDKload: cannot read: name=11\1100, ext=tail, 400000 bytes missing.
  !OS: The operation completed successfully.
  !ERROR: GDKload: failed name=11\1100, ext=tail
  0 tuples

I tried two things:

1. I ran 16 COPY INTO statements, but this time into 16 different tables, and then ran INSERT statements to move all of them into one target table (rough sketch at the end of this message). That worked fine, and I passed the 3-million-row barrier I ran into with COPY INTO.
2. I tried the same thing on a 64-bit machine, and everything worked just fine with COPY INTO.

So what does all of this mean? How do I interpret the error message? Can I / should I expect a 32-bit machine to be able to handle what I wanted it to do? And if it is a memory issue, how come everything worked with INSERTs but not with COPY INTO?

Thanks,
David.
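P.S. In case it helps, this is roughly the shape of what I am running. The table names, the column list, and the file paths below are simplified placeholders, not my real schema:

  -- Direct approach: bulk load every file into one table
  -- (this is what fails around the 15th file on the 32-bit machine)
  CREATE TABLE target (c1 INT, c2 VARCHAR(100) /* ... 59 columns in total ... */);
  COPY 200000 RECORDS INTO target FROM 'C:\\data\\file01.csv'
       USING DELIMITERS ',', '\n', '"';
  -- ... repeated for file02.csv through file20.csv

  -- Workaround: one staging table per file, then INSERT ... SELECT
  -- into the target table (this got me past 3 million rows)
  CREATE TABLE part01 (c1 INT, c2 VARCHAR(100) /* ... same 59 columns ... */);
  COPY 200000 RECORDS INTO part01 FROM 'C:\\data\\file01.csv'
       USING DELIMITERS ',', '\n', '"';
  -- ... same for part02 through part16 ...
  INSERT INTO target SELECT * FROM part01;
  INSERT INTO target SELECT * FROM part02;
  -- ... and so on, then drop the staging tables.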