
Hi all,

I am trying to copy a table from a PostgreSQL database into MonetDB, but it fails, with MonetDB filling the disks. I am using the latest version of MonetDB (Feb2013-SP3) on Ubuntu Linux, with 32 GB RAM and SSD disks.

To copy into MonetDB, I have a Java program that runs mclient as:

    mclient --language=sql --database=database --host=host --user=user /tmp/command.ctl

with command.ctl containing:

    COPY nnn RECORDS INTO table FROM STDIN USING DELIMITERS '|', '\n', '\' NULL AS ''

The PostgreSQL result set is then read via JDBC, formatted, and written into STDIN.

So far so good: it works with my dimension tables (small), and with a 295 million row fact table that has 109 columns. This fact table takes 86 GB of disk for PostgreSQL storage (plus indices).

I have another form of the fact table, with fewer rows (184 million) but more columns (505). While the first form of the table has a measureId column and only one value column, the other form has one value column per measure. (It is a "sum(case when measureId=X then value else null ... group by" version of the first table.)

Loading this "columnar" table fails, with MonetDB filling the disks (something like 500 GB was free before the loading process started) and quitting. I am not sure what happens exactly on the server, but I get a broken-pipe error in Java, and the MonetDB database is lost. The table size on disk for PostgreSQL is 37 GB.

Using VectorWise, the table loads and takes only 7 GB on disk, but the loading process (very similar to what we do with MonetDB) takes a very long time (25 hours), much longer than with the 109-column table.

I am trying to make sense of this and understand why this table gives MonetDB (and somehow VectorWise as well) such a (very) hard time.

Does anyone have an explanation?

thanks,
Franck
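For reference, the "formatted and written into STDIN" step looks roughly like the sketch below. The class and method names are hypothetical (this is not the actual loader code); it assumes '|' as the field delimiter, '\n' as the record delimiter, '\' as the escape character, and the empty string for NULL, matching the COPY statement above:

```java
// Hypothetical sketch of per-field formatting before writing rows to
// mclient's STDIN. Assumes delimiters '|', '\n', escape '\', NULL AS ''.
public class RowFormatter {

    // Render one JDBC value as a COPY field: NULL becomes the empty string,
    // and the escape/delimiter/newline characters are backslash-escaped.
    public static String formatField(Object value) {
        if (value == null) {
            return "";                               // NULL AS ''
        }
        String s = value.toString();
        StringBuilder sb = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '\\': sb.append("\\\\"); break; // escape the escape char
                case '|':  sb.append("\\|");  break; // escape the field delimiter
                case '\n': sb.append("\\n");  break; // escape the record delimiter
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    // Join one row's fields with '|' and terminate the record with '\n'.
    public static String formatRow(Object... fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append('|');
            sb.append(formatField(fields[i]));
        }
        return sb.append('\n').toString();
    }
}
```

In the real program each formatted row is written to the OutputStream of the mclient child process instead of being collected as a String.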